Test Report: Docker_Linux_containerd_arm64 19790

b9d2e2c9658f87d0032c63e9ff5f9056e8d14f14:2024-10-14:36644

Failed tests (2/329)

| Order | Failed test                                             | Duration (s) |
|-------|---------------------------------------------------------|--------------|
| 29    | TestAddons/serial/Volcano                               | 210.99       |
| 303   | TestStartStop/group/old-k8s-version/serial/SecondStart  | 382.94       |
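The Volcano failure expanded below comes down to a pod that never scheduled: the single node reported "0/1 nodes are unavailable: 1 Insufficient cpu." while the test job requests 1 CPU (per the pod describe output) on a node created with --cpus=2 that already runs the addon pods. As a hedged follow-up sketch, not part of this run, the node's allocatable CPU could be compared against what is already requested with standard kubectl commands against the addons-569374 context; the grep pattern and column spec here are illustrative:

# Show the node's "Allocated resources" summary (allocatable CPU vs. current requests)
kubectl --context addons-569374 describe node addons-569374 | grep -A8 'Allocated resources'
# List per-pod CPU requests cluster-wide to see what already reserves the node's CPUs
kubectl --context addons-569374 get pods -A -o custom-columns='NS:.metadata.namespace,POD:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu'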
TestAddons/serial/Volcano (210.99s)
=== RUN   TestAddons/serial/Volcano
addons_test.go:819: volcano-controller stabilized in 50.587717ms
addons_test.go:811: volcano-admission stabilized in 52.430697ms
addons_test.go:803: volcano-scheduler stabilized in 52.662604ms
addons_test.go:825: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-9mm8c" [309bfcc6-1067-4bb0-ae1f-db35265306bc] Running
addons_test.go:825: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003304602s
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-cg9fd" [349425ec-53ed-4a5d-9eb0-36376da305bf] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004057678s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-wbnfl" [64d2ce27-92d9-4f05-9203-d834f9218f51] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00352049s
addons_test.go:838: (dbg) Run:  kubectl --context addons-569374 delete -n volcano-system job volcano-admission-init
addons_test.go:844: (dbg) Run:  kubectl --context addons-569374 create -f testdata/vcjob.yaml
addons_test.go:852: (dbg) Run:  kubectl --context addons-569374 get vcjob -n my-volcano
addons_test.go:870: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [5f661ed9-bfaa-4279-b444-455e0e3dc980] Pending
helpers_test.go:344: "test-job-nginx-0" [5f661ed9-bfaa-4279-b444-455e0e3dc980] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
addons_test.go:870: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-569374 -n addons-569374
addons_test.go:870: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-10-14 13:44:50.076091513 +0000 UTC m=+368.742123951
addons_test.go:870: (dbg) Run:  kubectl --context addons-569374 describe po test-job-nginx-0 -n my-volcano
addons_test.go:870: (dbg) kubectl --context addons-569374 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-11136950-cd41-4645-b8e8-43dc34a5a945
                  volcano.sh/job-name: test-job
                  volcano.sh/job-retry-count: 0
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p8npv (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-p8npv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:870: (dbg) Run:  kubectl --context addons-569374 logs test-job-nginx-0 -n my-volcano
addons_test.go:870: (dbg) kubectl --context addons-569374 logs test-job-nginx-0 -n my-volcano:
addons_test.go:871: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-569374
helpers_test.go:235: (dbg) docker inspect addons-569374:
-- stdout --
	[
	    {
	        "Id": "6e33985cf6ca030411909b96e9244e25c717097a9e9274c7f3744426ee732324",
	        "Created": "2024-10-14T13:39:26.440751392Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8804,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-14T13:39:26.604319716Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e5ca9b83e048da5ecbd9864892b13b9f06d661ec5eae41590141157c6fe62bf7",
	        "ResolvConfPath": "/var/lib/docker/containers/6e33985cf6ca030411909b96e9244e25c717097a9e9274c7f3744426ee732324/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6e33985cf6ca030411909b96e9244e25c717097a9e9274c7f3744426ee732324/hostname",
	        "HostsPath": "/var/lib/docker/containers/6e33985cf6ca030411909b96e9244e25c717097a9e9274c7f3744426ee732324/hosts",
	        "LogPath": "/var/lib/docker/containers/6e33985cf6ca030411909b96e9244e25c717097a9e9274c7f3744426ee732324/6e33985cf6ca030411909b96e9244e25c717097a9e9274c7f3744426ee732324-json.log",
	        "Name": "/addons-569374",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-569374:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-569374",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9fc876b2b8bb666a82b0622d52a096247e8b9bbaee92140537a6adb6d73588c7-init/diff:/var/lib/docker/overlay2/d8164b8c8c613df332ab63ecaf21de80c344b1fe32149b3955f3e5228a19c126/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9fc876b2b8bb666a82b0622d52a096247e8b9bbaee92140537a6adb6d73588c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9fc876b2b8bb666a82b0622d52a096247e8b9bbaee92140537a6adb6d73588c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9fc876b2b8bb666a82b0622d52a096247e8b9bbaee92140537a6adb6d73588c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-569374",
	                "Source": "/var/lib/docker/volumes/addons-569374/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-569374",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-569374",
	                "name.minikube.sigs.k8s.io": "addons-569374",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dda78ae35bba135504c0c20bdaf509ee268738842ef342ff8ed954a1f782dd9b",
	            "SandboxKey": "/var/run/docker/netns/dda78ae35bba",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-569374": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "2511f4a1385bdd1346e9d1b4f1fb55a4edb25fc670f6591f1c4bb3a13ffdc4aa",
	                    "EndpointID": "0f4c0f74dd43975fbb2514e28aeca103aaeb7066c20f0fc0039291ea1b59adae",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-569374",
	                        "6e33985cf6ca"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-569374 -n addons-569374
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-569374 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-569374 logs -n 25: (1.565182139s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-191063   | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC |                     |
	|         | -p download-only-191063              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:38 UTC |
	| delete  | -p download-only-191063              | download-only-191063   | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:38 UTC |
	| start   | -o=json --download-only              | download-only-532133   | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC |                     |
	|         | -p download-only-532133              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:39 UTC |
	| delete  | -p download-only-532133              | download-only-532133   | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC | 14 Oct 24 13:39 UTC |
	| delete  | -p download-only-191063              | download-only-191063   | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC | 14 Oct 24 13:39 UTC |
	| delete  | -p download-only-532133              | download-only-532133   | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC | 14 Oct 24 13:39 UTC |
	| start   | --download-only -p                   | download-docker-570495 | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC |                     |
	|         | download-docker-570495               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-570495            | download-docker-570495 | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC | 14 Oct 24 13:39 UTC |
	| start   | --download-only -p                   | binary-mirror-153103   | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC |                     |
	|         | binary-mirror-153103                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:37259               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-153103              | binary-mirror-153103   | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC | 14 Oct 24 13:39 UTC |
	| addons  | disable dashboard -p                 | addons-569374          | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC |                     |
	|         | addons-569374                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-569374          | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC |                     |
	|         | addons-569374                        |                        |         |         |                     |                     |
	| start   | -p addons-569374 --wait=true         | addons-569374          | jenkins | v1.34.0 | 14 Oct 24 13:39 UTC | 14 Oct 24 13:41 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 13:39:01
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 13:39:01.901837    8306 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:39:01.902037    8306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:39:01.902064    8306 out.go:358] Setting ErrFile to fd 2...
	I1014 13:39:01.902082    8306 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:39:01.902475    8306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2229/.minikube/bin
	I1014 13:39:01.903536    8306 out.go:352] Setting JSON to false
	I1014 13:39:01.904235    8306 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":1293,"bootTime":1728911849,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 13:39:01.904304    8306 start.go:139] virtualization:  
	I1014 13:39:01.906491    8306 out.go:177] * [addons-569374] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1014 13:39:01.909399    8306 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 13:39:01.909463    8306 notify.go:220] Checking for updates...
	I1014 13:39:01.913755    8306 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:39:01.915858    8306 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-2229/kubeconfig
	I1014 13:39:01.917554    8306 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2229/.minikube
	I1014 13:39:01.919425    8306 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 13:39:01.921480    8306 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 13:39:01.923745    8306 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:39:01.942804    8306 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1014 13:39:01.942924    8306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 13:39:02.018241    8306 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-14 13:39:02.008384095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 13:39:02.018355    8306 docker.go:318] overlay module found
	I1014 13:39:02.020290    8306 out.go:177] * Using the docker driver based on user configuration
	I1014 13:39:02.022256    8306 start.go:297] selected driver: docker
	I1014 13:39:02.022278    8306 start.go:901] validating driver "docker" against <nil>
	I1014 13:39:02.022299    8306 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 13:39:02.023167    8306 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 13:39:02.075148    8306 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-10-14 13:39:02.065399411 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 13:39:02.075347    8306 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 13:39:02.075568    8306 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:39:02.077549    8306 out.go:177] * Using Docker driver with root privileges
	I1014 13:39:02.079361    8306 cni.go:84] Creating CNI manager for ""
	I1014 13:39:02.079420    8306 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1014 13:39:02.079433    8306 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 13:39:02.079515    8306 start.go:340] cluster config:
	{Name:addons-569374 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-569374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:39:02.081833    8306 out.go:177] * Starting "addons-569374" primary control-plane node in "addons-569374" cluster
	I1014 13:39:02.083558    8306 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1014 13:39:02.085327    8306 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
	I1014 13:39:02.089471    8306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1014 13:39:02.089529    8306 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-2229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1014 13:39:02.089542    8306 cache.go:56] Caching tarball of preloaded images
	I1014 13:39:02.089565    8306 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1014 13:39:02.089638    8306 preload.go:172] Found /home/jenkins/minikube-integration/19790-2229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 13:39:02.089649    8306 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I1014 13:39:02.090024    8306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/config.json ...
	I1014 13:39:02.090155    8306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/config.json: {Name:mk9953ad9fcec596bf7c9e9595a01f650cb6fca9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:02.106245    8306 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1014 13:39:02.106354    8306 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1014 13:39:02.106383    8306 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory, skipping pull
	I1014 13:39:02.106405    8306 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in cache, skipping pull
	I1014 13:39:02.106413    8306 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	I1014 13:39:02.106418    8306 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from local cache
	I1014 13:39:19.306950    8306 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from cached tarball
	I1014 13:39:19.306986    8306 cache.go:194] Successfully downloaded all kic artifacts
	I1014 13:39:19.307029    8306 start.go:360] acquireMachinesLock for addons-569374: {Name:mk5547283c60ce0d7b031ba9ef8954ca59e7bce9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 13:39:19.307158    8306 start.go:364] duration metric: took 109.333µs to acquireMachinesLock for "addons-569374"
	I1014 13:39:19.307192    8306 start.go:93] Provisioning new machine with config: &{Name:addons-569374 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-569374 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1014 13:39:19.307274    8306 start.go:125] createHost starting for "" (driver="docker")
	I1014 13:39:19.309677    8306 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1014 13:39:19.309928    8306 start.go:159] libmachine.API.Create for "addons-569374" (driver="docker")
	I1014 13:39:19.309965    8306 client.go:168] LocalClient.Create starting
	I1014 13:39:19.310099    8306 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca.pem
	I1014 13:39:19.588353    8306 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/cert.pem
	I1014 13:39:20.826104    8306 cli_runner.go:164] Run: docker network inspect addons-569374 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1014 13:39:20.842132    8306 cli_runner.go:211] docker network inspect addons-569374 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1014 13:39:20.842222    8306 network_create.go:284] running [docker network inspect addons-569374] to gather additional debugging logs...
	I1014 13:39:20.842243    8306 cli_runner.go:164] Run: docker network inspect addons-569374
	W1014 13:39:20.856700    8306 cli_runner.go:211] docker network inspect addons-569374 returned with exit code 1
	I1014 13:39:20.856732    8306 network_create.go:287] error running [docker network inspect addons-569374]: docker network inspect addons-569374: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-569374 not found
	I1014 13:39:20.856752    8306 network_create.go:289] output of [docker network inspect addons-569374]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-569374 not found
	
	** /stderr **
	I1014 13:39:20.856856    8306 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 13:39:20.871980    8306 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001943ee0}
	I1014 13:39:20.872028    8306 network_create.go:124] attempt to create docker network addons-569374 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1014 13:39:20.872085    8306 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-569374 addons-569374
	I1014 13:39:20.940367    8306 network_create.go:108] docker network addons-569374 192.168.49.0/24 created
	I1014 13:39:20.940401    8306 kic.go:121] calculated static IP "192.168.49.2" for the "addons-569374" container
	I1014 13:39:20.940490    8306 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1014 13:39:20.954523    8306 cli_runner.go:164] Run: docker volume create addons-569374 --label name.minikube.sigs.k8s.io=addons-569374 --label created_by.minikube.sigs.k8s.io=true
	I1014 13:39:20.971927    8306 oci.go:103] Successfully created a docker volume addons-569374
	I1014 13:39:20.972016    8306 cli_runner.go:164] Run: docker run --rm --name addons-569374-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-569374 --entrypoint /usr/bin/test -v addons-569374:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib
	I1014 13:39:22.329196    8306 cli_runner.go:217] Completed: docker run --rm --name addons-569374-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-569374 --entrypoint /usr/bin/test -v addons-569374:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib: (1.357136735s)
	I1014 13:39:22.329224    8306 oci.go:107] Successfully prepared a docker volume addons-569374
	I1014 13:39:22.329238    8306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1014 13:39:22.329257    8306 kic.go:194] Starting extracting preloaded images to volume ...
	I1014 13:39:22.329322    8306 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19790-2229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-569374:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir
	I1014 13:39:26.376701    8306 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19790-2229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-569374:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir: (4.047334151s)
	I1014 13:39:26.376735    8306 kic.go:203] duration metric: took 4.047474811s to extract preloaded images to volume ...
	W1014 13:39:26.376868    8306 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1014 13:39:26.376982    8306 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1014 13:39:26.426325    8306 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-569374 --name addons-569374 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-569374 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-569374 --network addons-569374 --ip 192.168.49.2 --volume addons-569374:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec
	I1014 13:39:26.755407    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Running}}
	I1014 13:39:26.778729    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:26.801500    8306 cli_runner.go:164] Run: docker exec addons-569374 stat /var/lib/dpkg/alternatives/iptables
	I1014 13:39:26.870464    8306 oci.go:144] the created container "addons-569374" has a running status.
	I1014 13:39:26.870497    8306 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa...
	I1014 13:39:27.574734    8306 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1014 13:39:27.606895    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:27.627830    8306 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1014 13:39:27.627848    8306 kic_runner.go:114] Args: [docker exec --privileged addons-569374 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1014 13:39:27.708391    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:27.732070    8306 machine.go:93] provisionDockerMachine start ...
	I1014 13:39:27.732164    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:27.750053    8306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:27.750301    8306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1014 13:39:27.750310    8306 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 13:39:27.884455    8306 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-569374
	
	I1014 13:39:27.884480    8306 ubuntu.go:169] provisioning hostname "addons-569374"
	I1014 13:39:27.884547    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:27.906674    8306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:27.906921    8306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1014 13:39:27.906943    8306 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-569374 && echo "addons-569374" | sudo tee /etc/hostname
	I1014 13:39:28.052839    8306 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-569374
	
	I1014 13:39:28.052944    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:28.071207    8306 main.go:141] libmachine: Using SSH client type: native
	I1014 13:39:28.071483    8306 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1014 13:39:28.071513    8306 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-569374' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-569374/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-569374' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 13:39:28.208937    8306 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 13:39:28.208961    8306 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19790-2229/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-2229/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-2229/.minikube}
	I1014 13:39:28.208992    8306 ubuntu.go:177] setting up certificates
	I1014 13:39:28.209002    8306 provision.go:84] configureAuth start
	I1014 13:39:28.209096    8306 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-569374
	I1014 13:39:28.225802    8306 provision.go:143] copyHostCerts
	I1014 13:39:28.225886    8306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-2229/.minikube/ca.pem (1082 bytes)
	I1014 13:39:28.226011    8306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-2229/.minikube/cert.pem (1123 bytes)
	I1014 13:39:28.226080    8306 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-2229/.minikube/key.pem (1679 bytes)
	I1014 13:39:28.226132    8306 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-2229/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca-key.pem org=jenkins.addons-569374 san=[127.0.0.1 192.168.49.2 addons-569374 localhost minikube]
	I1014 13:39:28.471889    8306 provision.go:177] copyRemoteCerts
	I1014 13:39:28.471957    8306 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 13:39:28.472011    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:28.489516    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:39:28.582581    8306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1014 13:39:28.607764    8306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1014 13:39:28.631668    8306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 13:39:28.655293    8306 provision.go:87] duration metric: took 446.272403ms to configureAuth
	I1014 13:39:28.655322    8306 ubuntu.go:193] setting minikube options for container-runtime
	I1014 13:39:28.655513    8306 config.go:182] Loaded profile config "addons-569374": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1014 13:39:28.655528    8306 machine.go:96] duration metric: took 923.440934ms to provisionDockerMachine
	I1014 13:39:28.655535    8306 client.go:171] duration metric: took 9.345558927s to LocalClient.Create
	I1014 13:39:28.655549    8306 start.go:167] duration metric: took 9.345622492s to libmachine.API.Create "addons-569374"
	I1014 13:39:28.655562    8306 start.go:293] postStartSetup for "addons-569374" (driver="docker")
	I1014 13:39:28.655572    8306 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 13:39:28.655622    8306 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 13:39:28.655667    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:28.672313    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:39:28.768517    8306 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 13:39:28.771634    8306 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 13:39:28.771671    8306 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1014 13:39:28.771694    8306 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1014 13:39:28.771705    8306 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1014 13:39:28.771717    8306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-2229/.minikube/addons for local assets ...
	I1014 13:39:28.771792    8306 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-2229/.minikube/files for local assets ...
	I1014 13:39:28.771819    8306 start.go:296] duration metric: took 116.249817ms for postStartSetup
	I1014 13:39:28.772126    8306 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-569374
	I1014 13:39:28.788075    8306 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/config.json ...
	I1014 13:39:28.788368    8306 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 13:39:28.788420    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:28.806148    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:39:28.893706    8306 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 13:39:28.897939    8306 start.go:128] duration metric: took 9.590647647s to createHost
	I1014 13:39:28.897963    8306 start.go:83] releasing machines lock for "addons-569374", held for 9.590789144s
	I1014 13:39:28.898046    8306 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-569374
	I1014 13:39:28.915157    8306 ssh_runner.go:195] Run: cat /version.json
	I1014 13:39:28.915212    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:28.915452    8306 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 13:39:28.915518    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:28.933184    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:39:28.938717    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:39:29.150919    8306 ssh_runner.go:195] Run: systemctl --version
	I1014 13:39:29.155300    8306 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 13:39:29.159439    8306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1014 13:39:29.183929    8306 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1014 13:39:29.184007    8306 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 13:39:29.211549    8306 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1014 13:39:29.211622    8306 start.go:495] detecting cgroup driver to use...
	I1014 13:39:29.211670    8306 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 13:39:29.211751    8306 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 13:39:29.224212    8306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 13:39:29.235732    8306 docker.go:217] disabling cri-docker service (if available) ...
	I1014 13:39:29.235823    8306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 13:39:29.250073    8306 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 13:39:29.264280    8306 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 13:39:29.352898    8306 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 13:39:29.446756    8306 docker.go:233] disabling docker service ...
	I1014 13:39:29.446845    8306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 13:39:29.465901    8306 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 13:39:29.477880    8306 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 13:39:29.565131    8306 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 13:39:29.656309    8306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 13:39:29.667227    8306 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 13:39:29.683285    8306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 13:39:29.693508    8306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 13:39:29.703275    8306 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 13:39:29.703388    8306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 13:39:29.714194    8306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 13:39:29.723735    8306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 13:39:29.733691    8306 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 13:39:29.743198    8306 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 13:39:29.752247    8306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 13:39:29.762407    8306 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 13:39:29.771834    8306 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
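	(Aside, not output from this run: the sed edits above rewrite /etc/containerd/config.toml in place; assuming the same config path, the affected keys can be spot-checked with a single grep, e.g.:)
	  sudo grep -nE 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml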
	I1014 13:39:29.781516    8306 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 13:39:29.790490    8306 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1014 13:39:29.790598    8306 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1014 13:39:29.804493    8306 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 13:39:29.813283    8306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:39:29.901030    8306 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1014 13:39:30.098492    8306 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1014 13:39:30.098691    8306 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1014 13:39:30.104663    8306 start.go:563] Will wait 60s for crictl version
	I1014 13:39:30.104762    8306 ssh_runner.go:195] Run: which crictl
	I1014 13:39:30.110135    8306 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 13:39:30.165753    8306 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1014 13:39:30.165912    8306 ssh_runner.go:195] Run: containerd --version
	I1014 13:39:30.189576    8306 ssh_runner.go:195] Run: containerd --version
	I1014 13:39:30.215754    8306 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I1014 13:39:30.217673    8306 cli_runner.go:164] Run: docker network inspect addons-569374 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 13:39:30.235794    8306 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1014 13:39:30.240219    8306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:39:30.251648    8306 kubeadm.go:883] updating cluster {Name:addons-569374 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-569374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 13:39:30.251768    8306 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1014 13:39:30.251836    8306 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 13:39:30.289986    8306 containerd.go:627] all images are preloaded for containerd runtime.
	I1014 13:39:30.290012    8306 containerd.go:534] Images already preloaded, skipping extraction
	I1014 13:39:30.290072    8306 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 13:39:30.329093    8306 containerd.go:627] all images are preloaded for containerd runtime.
	I1014 13:39:30.329119    8306 cache_images.go:84] Images are preloaded, skipping loading
	I1014 13:39:30.329128    8306 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I1014 13:39:30.329219    8306 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-569374 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-569374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 13:39:30.329287    8306 ssh_runner.go:195] Run: sudo crictl info
	I1014 13:39:30.369430    8306 cni.go:84] Creating CNI manager for ""
	I1014 13:39:30.369454    8306 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1014 13:39:30.369463    8306 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 13:39:30.369485    8306 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-569374 NodeName:addons-569374 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 13:39:30.369633    8306 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-569374"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 13:39:30.369707    8306 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 13:39:30.378761    8306 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 13:39:30.378832    8306 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 13:39:30.387523    8306 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1014 13:39:30.405456    8306 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 13:39:30.423308    8306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2303 bytes)
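	(Aside, not output from this run: the kubeadm config rendered above is staged to /var/tmp/minikube/kubeadm.yaml.new here and copied to kubeadm.yaml before init further below; assuming shell access to the node, such a config can be sanity-checked without changing the host via kubeadm's dry-run mode, e.g.:)
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run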
	I1014 13:39:30.441377    8306 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1014 13:39:30.444771    8306 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 13:39:30.455490    8306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:39:30.540401    8306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:39:30.555464    8306 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374 for IP: 192.168.49.2
	I1014 13:39:30.555526    8306 certs.go:194] generating shared ca certs ...
	I1014 13:39:30.555557    8306 certs.go:226] acquiring lock for ca certs: {Name:mk2a77364a9bb2b8250d1aa5761db5ebc543c9b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:30.555701    8306 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-2229/.minikube/ca.key
	I1014 13:39:31.034473    8306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-2229/.minikube/ca.crt ...
	I1014 13:39:31.034507    8306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2229/.minikube/ca.crt: {Name:mk646e956b07459225e14a66510e0c2a9f0106bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:31.034729    8306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-2229/.minikube/ca.key ...
	I1014 13:39:31.034744    8306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2229/.minikube/ca.key: {Name:mk949acf3677eb6f7b6f55f305c5ea3bdaf5ffd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:31.034844    8306 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-2229/.minikube/proxy-client-ca.key
	I1014 13:39:31.458326    8306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-2229/.minikube/proxy-client-ca.crt ...
	I1014 13:39:31.458361    8306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2229/.minikube/proxy-client-ca.crt: {Name:mkf4d4a250a014b2121657d4cd313f11d6066c1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:31.458551    8306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-2229/.minikube/proxy-client-ca.key ...
	I1014 13:39:31.458564    8306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2229/.minikube/proxy-client-ca.key: {Name:mkd1beb594bb1a7f536f570c80c4c5ebc344bb69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:31.458647    8306 certs.go:256] generating profile certs ...
	I1014 13:39:31.458704    8306 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.key
	I1014 13:39:31.458725    8306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt with IP's: []
	I1014 13:39:32.091559    8306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt ...
	I1014 13:39:32.091592    8306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: {Name:mkef428e670292f6064d4172a1223e57da56d170 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:32.091796    8306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.key ...
	I1014 13:39:32.091811    8306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.key: {Name:mkbdc2d311147dc6da7713cbe17806a1fb7af5af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:32.091896    8306 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/apiserver.key.25d4a042
	I1014 13:39:32.091916    8306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/apiserver.crt.25d4a042 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1014 13:39:32.724063    8306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/apiserver.crt.25d4a042 ...
	I1014 13:39:32.724095    8306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/apiserver.crt.25d4a042: {Name:mkb3261ea4331b99132ff0eb2b640c1d54438329 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:32.724310    8306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/apiserver.key.25d4a042 ...
	I1014 13:39:32.724328    8306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/apiserver.key.25d4a042: {Name:mk9a8f31ce568d7926014e0d4f4f89aebbd81c16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:32.724439    8306 certs.go:381] copying /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/apiserver.crt.25d4a042 -> /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/apiserver.crt
	I1014 13:39:32.724534    8306 certs.go:385] copying /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/apiserver.key.25d4a042 -> /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/apiserver.key
	I1014 13:39:32.724589    8306 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/proxy-client.key
	I1014 13:39:32.724609    8306 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/proxy-client.crt with IP's: []
	I1014 13:39:33.120708    8306 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/proxy-client.crt ...
	I1014 13:39:33.120762    8306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/proxy-client.crt: {Name:mk787b601957237912e52b9cf2d2d31910e83d3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:33.121081    8306 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/proxy-client.key ...
	I1014 13:39:33.121100    8306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/proxy-client.key: {Name:mk4ecd4ede4b36be5b8935db293df03e8a6a0bdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:33.121353    8306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 13:39:33.121401    8306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca.pem (1082 bytes)
	I1014 13:39:33.121443    8306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/cert.pem (1123 bytes)
	I1014 13:39:33.121471    8306 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/key.pem (1679 bytes)
	I1014 13:39:33.122286    8306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 13:39:33.149253    8306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 13:39:33.174412    8306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 13:39:33.199693    8306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 13:39:33.224061    8306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1014 13:39:33.248059    8306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 13:39:33.272826    8306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 13:39:33.296809    8306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1014 13:39:33.322760    8306 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 13:39:33.347752    8306 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 13:39:33.365943    8306 ssh_runner.go:195] Run: openssl version
	I1014 13:39:33.371562    8306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 13:39:33.381148    8306 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:39:33.384501    8306 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:39:33.384566    8306 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 13:39:33.391589    8306 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
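	(Aside, not output from this run: the /etc/ssl/certs/b5213941.0 link name above is the OpenSSL subject hash of the minikube CA, i.e. the value printed by the x509 -hash command two lines up; assuming the same certificate, it can be reproduced with:)
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # expected output: b5213941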
	I1014 13:39:33.400533    8306 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 13:39:33.403662    8306 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1014 13:39:33.403707    8306 kubeadm.go:392] StartCluster: {Name:addons-569374 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-569374 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:39:33.403782    8306 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1014 13:39:33.403835    8306 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 13:39:33.440832    8306 cri.go:89] found id: ""
	I1014 13:39:33.440905    8306 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 13:39:33.449587    8306 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1014 13:39:33.458140    8306 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1014 13:39:33.458223    8306 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1014 13:39:33.468458    8306 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1014 13:39:33.468480    8306 kubeadm.go:157] found existing configuration files:
	
	I1014 13:39:33.468551    8306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1014 13:39:33.476922    8306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1014 13:39:33.476985    8306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1014 13:39:33.486269    8306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1014 13:39:33.496484    8306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1014 13:39:33.496586    8306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1014 13:39:33.508122    8306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1014 13:39:33.519041    8306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1014 13:39:33.519157    8306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1014 13:39:33.528783    8306 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1014 13:39:33.538814    8306 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1014 13:39:33.538930    8306 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1014 13:39:33.550222    8306 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1014 13:39:33.600300    8306 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1014 13:39:33.600638    8306 kubeadm.go:310] [preflight] Running pre-flight checks
	I1014 13:39:33.619870    8306 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1014 13:39:33.619946    8306 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I1014 13:39:33.619987    8306 kubeadm.go:310] OS: Linux
	I1014 13:39:33.620037    8306 kubeadm.go:310] CGROUPS_CPU: enabled
	I1014 13:39:33.620089    8306 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1014 13:39:33.620140    8306 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1014 13:39:33.620191    8306 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1014 13:39:33.620243    8306 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1014 13:39:33.620297    8306 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1014 13:39:33.620363    8306 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1014 13:39:33.620416    8306 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1014 13:39:33.620466    8306 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1014 13:39:33.682602    8306 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1014 13:39:33.682719    8306 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1014 13:39:33.682816    8306 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1014 13:39:33.688377    8306 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1014 13:39:33.691300    8306 out.go:235]   - Generating certificates and keys ...
	I1014 13:39:33.691401    8306 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1014 13:39:33.691472    8306 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1014 13:39:34.345395    8306 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1014 13:39:34.775687    8306 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1014 13:39:34.918771    8306 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1014 13:39:35.444409    8306 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1014 13:39:35.905436    8306 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1014 13:39:35.905568    8306 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-569374 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 13:39:36.356272    8306 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1014 13:39:36.356578    8306 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-569374 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1014 13:39:36.466384    8306 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1014 13:39:36.614675    8306 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1014 13:39:37.122777    8306 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1014 13:39:37.123021    8306 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1014 13:39:37.343702    8306 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1014 13:39:38.106425    8306 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1014 13:39:38.717787    8306 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1014 13:39:39.075020    8306 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1014 13:39:39.412749    8306 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1014 13:39:39.413863    8306 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1014 13:39:39.419629    8306 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1014 13:39:39.422086    8306 out.go:235]   - Booting up control plane ...
	I1014 13:39:39.422194    8306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1014 13:39:39.422283    8306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1014 13:39:39.423588    8306 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1014 13:39:39.442393    8306 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1014 13:39:39.448863    8306 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1014 13:39:39.448923    8306 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1014 13:39:39.549551    8306 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1014 13:39:39.549680    8306 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1014 13:39:41.048304    8306 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501939359s
	I1014 13:39:41.048429    8306 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1014 13:39:47.049465    8306 kubeadm.go:310] [api-check] The API server is healthy after 6.001407669s
	I1014 13:39:47.069154    8306 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1014 13:39:47.085573    8306 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1014 13:39:47.113262    8306 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1014 13:39:47.113463    8306 kubeadm.go:310] [mark-control-plane] Marking the node addons-569374 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1014 13:39:47.123469    8306 kubeadm.go:310] [bootstrap-token] Using token: 5km7au.cobkpz68fgz1s1an
	I1014 13:39:47.125618    8306 out.go:235]   - Configuring RBAC rules ...
	I1014 13:39:47.125845    8306 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1014 13:39:47.132919    8306 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1014 13:39:47.141866    8306 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1014 13:39:47.145885    8306 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1014 13:39:47.149905    8306 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1014 13:39:47.153345    8306 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1014 13:39:47.456880    8306 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1014 13:39:47.887411    8306 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1014 13:39:48.456326    8306 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1014 13:39:48.457485    8306 kubeadm.go:310] 
	I1014 13:39:48.457557    8306 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1014 13:39:48.457563    8306 kubeadm.go:310] 
	I1014 13:39:48.457639    8306 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1014 13:39:48.457643    8306 kubeadm.go:310] 
	I1014 13:39:48.457669    8306 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1014 13:39:48.457726    8306 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1014 13:39:48.457776    8306 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1014 13:39:48.457781    8306 kubeadm.go:310] 
	I1014 13:39:48.457843    8306 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1014 13:39:48.457852    8306 kubeadm.go:310] 
	I1014 13:39:48.457898    8306 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1014 13:39:48.457903    8306 kubeadm.go:310] 
	I1014 13:39:48.457954    8306 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1014 13:39:48.458027    8306 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1014 13:39:48.458094    8306 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1014 13:39:48.458099    8306 kubeadm.go:310] 
	I1014 13:39:48.458181    8306 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1014 13:39:48.458256    8306 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1014 13:39:48.458261    8306 kubeadm.go:310] 
	I1014 13:39:48.458342    8306 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5km7au.cobkpz68fgz1s1an \
	I1014 13:39:48.458448    8306 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ced76b34e4079bf731359f44142194e5dd7b51650f562c64f0fb574075833da4 \
	I1014 13:39:48.458469    8306 kubeadm.go:310] 	--control-plane 
	I1014 13:39:48.458474    8306 kubeadm.go:310] 
	I1014 13:39:48.458557    8306 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1014 13:39:48.458561    8306 kubeadm.go:310] 
	I1014 13:39:48.458641    8306 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5km7au.cobkpz68fgz1s1an \
	I1014 13:39:48.458740    8306 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:ced76b34e4079bf731359f44142194e5dd7b51650f562c64f0fb574075833da4 
	I1014 13:39:48.462491    8306 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I1014 13:39:48.462622    8306 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1014 13:39:48.462661    8306 cni.go:84] Creating CNI manager for ""
	I1014 13:39:48.462674    8306 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1014 13:39:48.465392    8306 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1014 13:39:48.467202    8306 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1014 13:39:48.470968    8306 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1014 13:39:48.470992    8306 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1014 13:39:48.490546    8306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1014 13:39:48.772113    8306 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1014 13:39:48.772251    8306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:48.772327    8306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-569374 minikube.k8s.io/updated_at=2024_10_14T13_39_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b minikube.k8s.io/name=addons-569374 minikube.k8s.io/primary=true
	I1014 13:39:48.922173    8306 ops.go:34] apiserver oom_adj: -16
	I1014 13:39:48.930010    8306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:49.430199    8306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:49.930516    8306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:50.430054    8306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:50.930292    8306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:51.430597    8306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:51.930673    8306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:52.430727    8306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:52.930325    8306 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1014 13:39:53.088164    8306 kubeadm.go:1113] duration metric: took 4.315968692s to wait for elevateKubeSystemPrivileges
	I1014 13:39:53.088189    8306 kubeadm.go:394] duration metric: took 19.684485173s to StartCluster
	I1014 13:39:53.088205    8306 settings.go:142] acquiring lock: {Name:mk7dda8238a0606dcfbe3db5d257a14d7d308979 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:53.088317    8306 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-2229/kubeconfig
	I1014 13:39:53.088748    8306 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2229/kubeconfig: {Name:mk7703bee112acb0d700fbfe8aa7245ea0dd07d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 13:39:53.088937    8306 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1014 13:39:53.089148    8306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1014 13:39:53.089393    8306 config.go:182] Loaded profile config "addons-569374": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1014 13:39:53.089440    8306 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1014 13:39:53.089518    8306 addons.go:69] Setting yakd=true in profile "addons-569374"
	I1014 13:39:53.089537    8306 addons.go:234] Setting addon yakd=true in "addons-569374"
	I1014 13:39:53.089564    8306 host.go:66] Checking if "addons-569374" exists ...
	I1014 13:39:53.090066    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:53.090589    8306 addons.go:69] Setting metrics-server=true in profile "addons-569374"
	I1014 13:39:53.090607    8306 addons.go:234] Setting addon metrics-server=true in "addons-569374"
	I1014 13:39:53.090632    8306 host.go:66] Checking if "addons-569374" exists ...
	I1014 13:39:53.091043    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:53.091328    8306 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-569374"
	I1014 13:39:53.091346    8306 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-569374"
	I1014 13:39:53.091369    8306 host.go:66] Checking if "addons-569374" exists ...
	I1014 13:39:53.091768    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:53.099223    8306 addons.go:69] Setting registry=true in profile "addons-569374"
	I1014 13:39:53.099261    8306 addons.go:234] Setting addon registry=true in "addons-569374"
	I1014 13:39:53.099297    8306 host.go:66] Checking if "addons-569374" exists ...
	I1014 13:39:53.099753    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:53.099933    8306 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-569374"
	I1014 13:39:53.099955    8306 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-569374"
	I1014 13:39:53.099988    8306 host.go:66] Checking if "addons-569374" exists ...
	I1014 13:39:53.100410    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:53.110325    8306 addons.go:69] Setting storage-provisioner=true in profile "addons-569374"
	I1014 13:39:53.110360    8306 addons.go:234] Setting addon storage-provisioner=true in "addons-569374"
	I1014 13:39:53.110397    8306 host.go:66] Checking if "addons-569374" exists ...
	I1014 13:39:53.110861    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:53.113554    8306 addons.go:69] Setting cloud-spanner=true in profile "addons-569374"
	I1014 13:39:53.113586    8306 addons.go:234] Setting addon cloud-spanner=true in "addons-569374"
	I1014 13:39:53.113625    8306 host.go:66] Checking if "addons-569374" exists ...
	I1014 13:39:53.114084    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:53.127213    8306 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-569374"
	I1014 13:39:53.127295    8306 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-569374"
	I1014 13:39:53.127333    8306 host.go:66] Checking if "addons-569374" exists ...
	I1014 13:39:53.128007    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:53.131680    8306 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-569374"
	I1014 13:39:53.132744    8306 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-569374"
	I1014 13:39:53.137884    8306 addons.go:69] Setting volcano=true in profile "addons-569374"
	I1014 13:39:53.163060    8306 addons.go:234] Setting addon volcano=true in "addons-569374"
	I1014 13:39:53.137900    8306 addons.go:69] Setting volumesnapshots=true in profile "addons-569374"
	I1014 13:39:53.137957    8306 out.go:177] * Verifying Kubernetes components...
	I1014 13:39:53.143006    8306 addons.go:69] Setting default-storageclass=true in profile "addons-569374"
	I1014 13:39:53.143018    8306 addons.go:69] Setting gcp-auth=true in profile "addons-569374"
	I1014 13:39:53.143027    8306 addons.go:69] Setting ingress=true in profile "addons-569374"
	I1014 13:39:53.143032    8306 addons.go:69] Setting ingress-dns=true in profile "addons-569374"
	I1014 13:39:53.143035    8306 addons.go:69] Setting inspektor-gadget=true in profile "addons-569374"
	I1014 13:39:53.163527    8306 addons.go:234] Setting addon inspektor-gadget=true in "addons-569374"
	I1014 13:39:53.163745    8306 host.go:66] Checking if "addons-569374" exists ...
	I1014 13:39:53.167060    8306 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-569374"
	I1014 13:39:53.168077    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:53.173311    8306 mustload.go:65] Loading cluster: addons-569374
	I1014 13:39:53.173739    8306 config.go:182] Loaded profile config "addons-569374": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1014 13:39:53.182259    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:53.184399    8306 host.go:66] Checking if "addons-569374" exists ...
	I1014 13:39:53.184941    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:53.173942    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:53.204651    8306 addons.go:234] Setting addon volumesnapshots=true in "addons-569374"
	I1014 13:39:53.204742    8306 host.go:66] Checking if "addons-569374" exists ...
	I1014 13:39:53.173956    8306 addons.go:234] Setting addon ingress=true in "addons-569374"
	I1014 13:39:53.205288    8306 host.go:66] Checking if "addons-569374" exists ...
	I1014 13:39:53.205831    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:53.206294    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:53.173968    8306 addons.go:234] Setting addon ingress-dns=true in "addons-569374"
	I1014 13:39:53.231564    8306 host.go:66] Checking if "addons-569374" exists ...
	I1014 13:39:53.232072    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:53.234916    8306 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 13:39:53.238244    8306 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1014 13:39:53.174353    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:53.246739    8306 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1014 13:39:53.246761    8306 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1014 13:39:53.246826    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:53.263637    8306 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1014 13:39:53.290579    8306 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1014 13:39:53.290851    8306 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1014 13:39:53.309107    8306 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 13:39:53.291178    8306 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1014 13:39:53.310567    8306 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1014 13:39:53.330744    8306 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1014 13:39:53.330817    8306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1014 13:39:53.330916    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:53.310580    8306 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 13:39:53.335090    8306 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 13:39:53.335189    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:53.348302    8306 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1014 13:39:53.348376    8306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1014 13:39:53.348484    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:53.312673    8306 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1014 13:39:53.370894    8306 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1014 13:39:53.370977    8306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1014 13:39:53.371068    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:53.392634    8306 out.go:177]   - Using image docker.io/registry:2.8.3
	I1014 13:39:53.399497    8306 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1014 13:39:53.399571    8306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1014 13:39:53.399655    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:53.403537    8306 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1014 13:39:53.406168    8306 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1014 13:39:53.408395    8306 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1014 13:39:53.410358    8306 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1014 13:39:53.414478    8306 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1014 13:39:53.415966    8306 host.go:66] Checking if "addons-569374" exists ...
	I1014 13:39:53.419793    8306 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 13:39:53.419814    8306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 13:39:53.419889    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:53.457095    8306 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1014 13:39:53.457537    8306 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1014 13:39:53.457632    8306 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I1014 13:39:53.458803    8306 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-569374"
	I1014 13:39:53.459607    8306 host.go:66] Checking if "addons-569374" exists ...
	I1014 13:39:53.460110    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:53.466336    8306 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1014 13:39:53.459341    8306 addons.go:234] Setting addon default-storageclass=true in "addons-569374"
	I1014 13:39:53.466585    8306 host.go:66] Checking if "addons-569374" exists ...
	I1014 13:39:53.467102    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:39:53.475898    8306 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1014 13:39:53.475921    8306 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1014 13:39:53.475996    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:53.486475    8306 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1014 13:39:53.487032    8306 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1014 13:39:53.487341    8306 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I1014 13:39:53.504196    8306 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I1014 13:39:53.504879    8306 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1014 13:39:53.505086    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:39:53.505155    8306 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1014 13:39:53.506339    8306 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 13:39:53.515321    8306 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1014 13:39:53.516090    8306 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1014 13:39:53.516106    8306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1014 13:39:53.516160    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:53.516521    8306 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1014 13:39:53.516533    8306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1014 13:39:53.516575    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:53.537866    8306 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1014 13:39:53.545291    8306 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1014 13:39:53.545324    8306 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1014 13:39:53.545389    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:53.546055    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:39:53.551873    8306 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1014 13:39:53.551896    8306 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1014 13:39:53.551956    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:53.570993    8306 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I1014 13:39:53.571019    8306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I1014 13:39:53.571087    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:53.585302    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:39:53.597379    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:39:53.651080    8306 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1014 13:39:53.655594    8306 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 13:39:53.655617    8306 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 13:39:53.655677    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:53.656771    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:39:53.657910    8306 out.go:177]   - Using image docker.io/busybox:stable
	I1014 13:39:53.665808    8306 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1014 13:39:53.665834    8306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1014 13:39:53.665898    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:39:53.672820    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:39:53.689831    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:39:53.700573    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:39:53.745311    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:39:53.753696    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:39:53.762454    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:39:53.770197    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:39:53.783060    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:39:53.786000    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	W1014 13:39:53.788460    8306 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1014 13:39:53.788494    8306 retry.go:31] will retry after 222.82263ms: ssh: handshake failed: EOF
	I1014 13:39:53.790616    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:39:53.955808    8306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1014 13:39:54.041426    8306 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1014 13:39:54.041454    8306 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1014 13:39:54.199805    8306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1014 13:39:54.307320    8306 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1014 13:39:54.307344    8306 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1014 13:39:54.372305    8306 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1014 13:39:54.372387    8306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1014 13:39:54.378411    8306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1014 13:39:54.397752    8306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1014 13:39:54.436248    8306 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 13:39:54.436275    8306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1014 13:39:54.537639    8306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 13:39:54.541359    8306 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1014 13:39:54.541384    8306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1014 13:39:54.547868    8306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 13:39:54.565306    8306 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1014 13:39:54.565333    8306 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1014 13:39:54.568536    8306 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1014 13:39:54.568560    8306 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1014 13:39:54.603156    8306 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1014 13:39:54.603182    8306 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1014 13:39:54.672927    8306 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1014 13:39:54.672953    8306 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1014 13:39:54.704566    8306 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 13:39:54.704592    8306 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 13:39:54.755141    8306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1014 13:39:54.828291    8306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1014 13:39:54.835706    8306 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1014 13:39:54.835734    8306 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1014 13:39:54.836145    8306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1014 13:39:54.851078    8306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I1014 13:39:54.885291    8306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1014 13:39:54.938243    8306 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.674573864s)
	I1014 13:39:54.938272    8306 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1014 13:39:54.939314    8306 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.432913667s)
	I1014 13:39:54.940018    8306 node_ready.go:35] waiting up to 6m0s for node "addons-569374" to be "Ready" ...
	I1014 13:39:54.944958    8306 node_ready.go:49] node "addons-569374" has status "Ready":"True"
	I1014 13:39:54.944986    8306 node_ready.go:38] duration metric: took 4.941065ms for node "addons-569374" to be "Ready" ...
	I1014 13:39:54.944996    8306 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 13:39:54.965423    8306 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-d82ng" in "kube-system" namespace to be "Ready" ...
	I1014 13:39:55.008001    8306 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1014 13:39:55.008031    8306 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1014 13:39:55.130759    8306 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 13:39:55.130834    8306 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 13:39:55.161019    8306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.205127107s)
	I1014 13:39:55.161464    8306 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1014 13:39:55.161484    8306 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1014 13:39:55.168885    8306 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1014 13:39:55.168914    8306 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1014 13:39:55.324302    8306 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1014 13:39:55.324328    8306 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1014 13:39:55.442635    8306 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-569374" context rescaled to 1 replicas
	I1014 13:39:55.467015    8306 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1014 13:39:55.467038    8306 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1014 13:39:55.473528    8306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 13:39:55.532424    8306 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1014 13:39:55.532450    8306 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1014 13:39:55.675640    8306 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1014 13:39:55.675666    8306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1014 13:39:55.715169    8306 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 13:39:55.715195    8306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1014 13:39:56.052421    8306 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1014 13:39:56.052507    8306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1014 13:39:56.076580    8306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1014 13:39:56.394844    8306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 13:39:56.511988    8306 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1014 13:39:56.512063    8306 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1014 13:39:56.752742    8306 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1014 13:39:56.752815    8306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1014 13:39:56.972899    8306 pod_ready.go:103] pod "coredns-7c65d6cfc9-d82ng" in "kube-system" namespace has status "Ready":"False"
	I1014 13:39:57.068094    8306 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1014 13:39:57.068176    8306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1014 13:39:57.373844    8306 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1014 13:39:57.373924    8306 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1014 13:39:57.799197    8306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1014 13:39:58.802740    8306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.60290216s)
	I1014 13:39:58.802998    8306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.424516923s)
	I1014 13:39:58.803040    8306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.405220466s)
	I1014 13:39:58.803081    8306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.265420717s)
	I1014 13:39:58.803233    8306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.255344061s)
	I1014 13:39:58.803292    8306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.048126786s)
	W1014 13:39:58.814033    8306 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1014 13:39:58.990488    8306 pod_ready.go:103] pod "coredns-7c65d6cfc9-d82ng" in "kube-system" namespace has status "Ready":"False"
	I1014 13:39:59.727676    8306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.891493444s)
	I1014 13:39:59.727722    8306 addons.go:475] Verifying addon registry=true in "addons-569374"
	I1014 13:39:59.727963    8306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.899644164s)
	I1014 13:39:59.730662    8306 out.go:177] * Verifying registry addon...
	I1014 13:39:59.733512    8306 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1014 13:39:59.739703    8306 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1014 13:39:59.739738    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:00.308630    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:00.663209    8306 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1014 13:40:00.663426    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:40:00.726265    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:40:00.791034    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:00.998362    8306 pod_ready.go:103] pod "coredns-7c65d6cfc9-d82ng" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:01.258179    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:01.412516    8306 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1014 13:40:01.494945    8306 addons.go:234] Setting addon gcp-auth=true in "addons-569374"
	I1014 13:40:01.495044    8306 host.go:66] Checking if "addons-569374" exists ...
	I1014 13:40:01.495628    8306 cli_runner.go:164] Run: docker container inspect addons-569374 --format={{.State.Status}}
	I1014 13:40:01.527144    8306 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1014 13:40:01.527197    8306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-569374
	I1014 13:40:01.559723    8306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/addons-569374/id_rsa Username:docker}
	I1014 13:40:01.738236    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:02.238159    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:02.738723    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:03.242195    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:03.503131    8306 pod_ready.go:103] pod "coredns-7c65d6cfc9-d82ng" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:03.740987    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:04.261869    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:04.590149    8306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.738924468s)
	I1014 13:40:04.590349    8306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.704983768s)
	I1014 13:40:04.590407    8306 addons.go:475] Verifying addon ingress=true in "addons-569374"
	I1014 13:40:04.590446    8306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.11688892s)
	I1014 13:40:04.590473    8306 addons.go:475] Verifying addon metrics-server=true in "addons-569374"
	I1014 13:40:04.590514    8306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.513862829s)
	I1014 13:40:04.590915    8306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.195983662s)
	W1014 13:40:04.590953    8306 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1014 13:40:04.590970    8306 retry.go:31] will retry after 227.718374ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1014 13:40:04.593699    8306 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-569374 service yakd-dashboard -n yakd-dashboard
	
	I1014 13:40:04.593855    8306 out.go:177] * Verifying ingress addon...
	I1014 13:40:04.597131    8306 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1014 13:40:04.641575    8306 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1014 13:40:04.641613    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:04.797352    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:04.820153    8306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1014 13:40:05.111715    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:05.248239    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:05.365924    8306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.566605254s)
	I1014 13:40:05.366028    8306 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.838861291s)
	I1014 13:40:05.366210    8306 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-569374"
	I1014 13:40:05.367928    8306 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1014 13:40:05.368021    8306 out.go:177] * Verifying csi-hostpath-driver addon...
	I1014 13:40:05.378540    8306 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1014 13:40:05.379265    8306 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1014 13:40:05.381138    8306 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1014 13:40:05.381160    8306 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1014 13:40:05.398743    8306 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1014 13:40:05.398818    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:05.451185    8306 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1014 13:40:05.451261    8306 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1014 13:40:05.527515    8306 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1014 13:40:05.527587    8306 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1014 13:40:05.579938    8306 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1014 13:40:05.602271    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:05.740315    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:05.884389    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:05.972026    8306 pod_ready.go:103] pod "coredns-7c65d6cfc9-d82ng" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:06.101498    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:06.237532    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:06.325483    8306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.505284263s)
	I1014 13:40:06.384876    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:06.609155    8306 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.029114993s)
	I1014 13:40:06.612573    8306 addons.go:475] Verifying addon gcp-auth=true in "addons-569374"
	I1014 13:40:06.614038    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:06.615100    8306 out.go:177] * Verifying gcp-auth addon...
	I1014 13:40:06.617765    8306 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1014 13:40:06.709551    8306 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1014 13:40:06.810614    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:06.912276    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:07.101273    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:07.238152    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:07.383636    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:07.601732    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:07.738105    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:07.885192    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:07.972156    8306 pod_ready.go:103] pod "coredns-7c65d6cfc9-d82ng" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:08.101361    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:08.238116    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:08.384699    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:08.602304    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:08.738709    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:08.886015    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:09.108993    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:09.237517    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:09.384630    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:09.602344    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:09.737612    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:09.884499    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:10.104966    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:10.238153    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:10.383826    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:10.471634    8306 pod_ready.go:103] pod "coredns-7c65d6cfc9-d82ng" in "kube-system" namespace has status "Ready":"False"
	I1014 13:40:10.601874    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:10.737188    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:10.884866    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:11.103985    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:11.237705    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:11.385177    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:11.471842    8306 pod_ready.go:93] pod "coredns-7c65d6cfc9-d82ng" in "kube-system" namespace has status "Ready":"True"
	I1014 13:40:11.471867    8306 pod_ready.go:82] duration metric: took 16.506405454s for pod "coredns-7c65d6cfc9-d82ng" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.471878    8306 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-frkp4" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.473822    8306 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-frkp4" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-frkp4" not found
	I1014 13:40:11.473847    8306 pod_ready.go:82] duration metric: took 1.961661ms for pod "coredns-7c65d6cfc9-frkp4" in "kube-system" namespace to be "Ready" ...
	E1014 13:40:11.473859    8306 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-frkp4" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-frkp4" not found
	I1014 13:40:11.473866    8306 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-569374" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.478714    8306 pod_ready.go:93] pod "etcd-addons-569374" in "kube-system" namespace has status "Ready":"True"
	I1014 13:40:11.478737    8306 pod_ready.go:82] duration metric: took 4.863041ms for pod "etcd-addons-569374" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.478751    8306 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-569374" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.483975    8306 pod_ready.go:93] pod "kube-apiserver-addons-569374" in "kube-system" namespace has status "Ready":"True"
	I1014 13:40:11.484052    8306 pod_ready.go:82] duration metric: took 5.292512ms for pod "kube-apiserver-addons-569374" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.484072    8306 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-569374" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.489429    8306 pod_ready.go:93] pod "kube-controller-manager-addons-569374" in "kube-system" namespace has status "Ready":"True"
	I1014 13:40:11.489455    8306 pod_ready.go:82] duration metric: took 5.373003ms for pod "kube-controller-manager-addons-569374" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.489467    8306 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kr2zj" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.602270    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:11.671242    8306 pod_ready.go:93] pod "kube-proxy-kr2zj" in "kube-system" namespace has status "Ready":"True"
	I1014 13:40:11.671267    8306 pod_ready.go:82] duration metric: took 181.792379ms for pod "kube-proxy-kr2zj" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.671279    8306 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-569374" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:11.738550    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:11.884318    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:12.069982    8306 pod_ready.go:93] pod "kube-scheduler-addons-569374" in "kube-system" namespace has status "Ready":"True"
	I1014 13:40:12.070010    8306 pod_ready.go:82] duration metric: took 398.72162ms for pod "kube-scheduler-addons-569374" in "kube-system" namespace to be "Ready" ...
	I1014 13:40:12.070021    8306 pod_ready.go:39] duration metric: took 17.125013466s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 13:40:12.070036    8306 api_server.go:52] waiting for apiserver process to appear ...
	I1014 13:40:12.070104    8306 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 13:40:12.083644    8306 api_server.go:72] duration metric: took 18.994679754s to wait for apiserver process to appear ...
	I1014 13:40:12.083672    8306 api_server.go:88] waiting for apiserver healthz status ...
	I1014 13:40:12.083694    8306 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1014 13:40:12.091357    8306 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1014 13:40:12.092416    8306 api_server.go:141] control plane version: v1.31.1
	I1014 13:40:12.092441    8306 api_server.go:131] duration metric: took 8.76192ms to wait for apiserver health ...
	I1014 13:40:12.092451    8306 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 13:40:12.101500    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:12.237897    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:12.276246    8306 system_pods.go:59] 18 kube-system pods found
	I1014 13:40:12.276285    8306 system_pods.go:61] "coredns-7c65d6cfc9-d82ng" [d411bedc-94bf-41cf-ac5e-cf94a395d87a] Running
	I1014 13:40:12.276296    8306 system_pods.go:61] "csi-hostpath-attacher-0" [b5783f59-72ff-499e-b7b8-4bd13e2f64f8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1014 13:40:12.276305    8306 system_pods.go:61] "csi-hostpath-resizer-0" [f348ac2b-3164-408b-acff-98577ef9325b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1014 13:40:12.276313    8306 system_pods.go:61] "csi-hostpathplugin-97829" [6ad9e6ce-8c4d-4ebc-babd-edd173146126] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1014 13:40:12.276318    8306 system_pods.go:61] "etcd-addons-569374" [e0e79c0e-8a70-41e8-aac4-beb967fb57ed] Running
	I1014 13:40:12.276323    8306 system_pods.go:61] "kindnet-kp25n" [33113d5b-02fc-47c8-88ce-38c77011c3cd] Running
	I1014 13:40:12.276334    8306 system_pods.go:61] "kube-apiserver-addons-569374" [070e00f4-ae86-4780-8cfd-171ae5b883de] Running
	I1014 13:40:12.276339    8306 system_pods.go:61] "kube-controller-manager-addons-569374" [ef10a865-6a14-441c-84d8-ac6f265db101] Running
	I1014 13:40:12.276350    8306 system_pods.go:61] "kube-ingress-dns-minikube" [57b76ee6-c5bc-4f3a-a2f0-59c73535de94] Running
	I1014 13:40:12.276354    8306 system_pods.go:61] "kube-proxy-kr2zj" [a410425a-2a3c-4c11-b8c1-64827f37062b] Running
	I1014 13:40:12.276368    8306 system_pods.go:61] "kube-scheduler-addons-569374" [a653756c-595d-419a-911e-32e5c91494a5] Running
	I1014 13:40:12.276378    8306 system_pods.go:61] "metrics-server-84c5f94fbc-7jpwg" [7fa7d567-36e6-474d-89c2-177f4ac21f68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 13:40:12.276386    8306 system_pods.go:61] "nvidia-device-plugin-daemonset-spwkr" [3a139f01-b9d1-463a-ad5c-3a03da931e90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1014 13:40:12.276393    8306 system_pods.go:61] "registry-66c9cd494c-zcf42" [6251ea02-1362-47e1-ac9b-c623c958b8ea] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1014 13:40:12.276402    8306 system_pods.go:61] "registry-proxy-kcr2s" [2653d442-3145-422a-9e46-69f2ced9ccf9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1014 13:40:12.276408    8306 system_pods.go:61] "snapshot-controller-56fcc65765-4hxrk" [a9f2c12c-ea27-4fc6-80eb-2956963d7a33] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 13:40:12.276416    8306 system_pods.go:61] "snapshot-controller-56fcc65765-q7drz" [10a630ce-317b-4439-92be-fa6d69557f14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 13:40:12.276421    8306 system_pods.go:61] "storage-provisioner" [2a706dcf-ded0-4cee-939a-670bde5e5c6c] Running
	I1014 13:40:12.276430    8306 system_pods.go:74] duration metric: took 183.97353ms to wait for pod list to return data ...
	I1014 13:40:12.276444    8306 default_sa.go:34] waiting for default service account to be created ...
	I1014 13:40:12.384700    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:12.469765    8306 default_sa.go:45] found service account: "default"
	I1014 13:40:12.469791    8306 default_sa.go:55] duration metric: took 193.340311ms for default service account to be created ...
	I1014 13:40:12.469802    8306 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 13:40:12.601894    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:12.676212    8306 system_pods.go:86] 18 kube-system pods found
	I1014 13:40:12.676254    8306 system_pods.go:89] "coredns-7c65d6cfc9-d82ng" [d411bedc-94bf-41cf-ac5e-cf94a395d87a] Running
	I1014 13:40:12.676264    8306 system_pods.go:89] "csi-hostpath-attacher-0" [b5783f59-72ff-499e-b7b8-4bd13e2f64f8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1014 13:40:12.676271    8306 system_pods.go:89] "csi-hostpath-resizer-0" [f348ac2b-3164-408b-acff-98577ef9325b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1014 13:40:12.676280    8306 system_pods.go:89] "csi-hostpathplugin-97829" [6ad9e6ce-8c4d-4ebc-babd-edd173146126] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1014 13:40:12.676285    8306 system_pods.go:89] "etcd-addons-569374" [e0e79c0e-8a70-41e8-aac4-beb967fb57ed] Running
	I1014 13:40:12.676290    8306 system_pods.go:89] "kindnet-kp25n" [33113d5b-02fc-47c8-88ce-38c77011c3cd] Running
	I1014 13:40:12.676294    8306 system_pods.go:89] "kube-apiserver-addons-569374" [070e00f4-ae86-4780-8cfd-171ae5b883de] Running
	I1014 13:40:12.676300    8306 system_pods.go:89] "kube-controller-manager-addons-569374" [ef10a865-6a14-441c-84d8-ac6f265db101] Running
	I1014 13:40:12.676311    8306 system_pods.go:89] "kube-ingress-dns-minikube" [57b76ee6-c5bc-4f3a-a2f0-59c73535de94] Running
	I1014 13:40:12.676316    8306 system_pods.go:89] "kube-proxy-kr2zj" [a410425a-2a3c-4c11-b8c1-64827f37062b] Running
	I1014 13:40:12.676326    8306 system_pods.go:89] "kube-scheduler-addons-569374" [a653756c-595d-419a-911e-32e5c91494a5] Running
	I1014 13:40:12.676333    8306 system_pods.go:89] "metrics-server-84c5f94fbc-7jpwg" [7fa7d567-36e6-474d-89c2-177f4ac21f68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 13:40:12.676344    8306 system_pods.go:89] "nvidia-device-plugin-daemonset-spwkr" [3a139f01-b9d1-463a-ad5c-3a03da931e90] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1014 13:40:12.676366    8306 system_pods.go:89] "registry-66c9cd494c-zcf42" [6251ea02-1362-47e1-ac9b-c623c958b8ea] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1014 13:40:12.676373    8306 system_pods.go:89] "registry-proxy-kcr2s" [2653d442-3145-422a-9e46-69f2ced9ccf9] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1014 13:40:12.676383    8306 system_pods.go:89] "snapshot-controller-56fcc65765-4hxrk" [a9f2c12c-ea27-4fc6-80eb-2956963d7a33] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 13:40:12.676389    8306 system_pods.go:89] "snapshot-controller-56fcc65765-q7drz" [10a630ce-317b-4439-92be-fa6d69557f14] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1014 13:40:12.676393    8306 system_pods.go:89] "storage-provisioner" [2a706dcf-ded0-4cee-939a-670bde5e5c6c] Running
	I1014 13:40:12.676403    8306 system_pods.go:126] duration metric: took 206.593599ms to wait for k8s-apps to be running ...
	I1014 13:40:12.676419    8306 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 13:40:12.676477    8306 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:40:12.690834    8306 system_svc.go:56] duration metric: took 14.406626ms WaitForService to wait for kubelet
	I1014 13:40:12.690861    8306 kubeadm.go:582] duration metric: took 19.601901291s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 13:40:12.690879    8306 node_conditions.go:102] verifying NodePressure condition ...
	I1014 13:40:12.737777    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:12.869690    8306 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 13:40:12.869726    8306 node_conditions.go:123] node cpu capacity is 2
	I1014 13:40:12.869739    8306 node_conditions.go:105] duration metric: took 178.85399ms to run NodePressure ...
	I1014 13:40:12.869751    8306 start.go:241] waiting for startup goroutines ...
	I1014 13:40:12.885138    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:13.103161    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:13.237968    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:13.384236    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:13.601817    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:13.737276    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:13.884933    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:14.102536    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:14.238058    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:14.384540    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:14.603155    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:14.737849    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:14.885917    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:15.104438    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:15.238051    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:15.384701    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:15.601551    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:15.737795    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:15.885091    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:16.101125    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:16.302407    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:16.385004    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:16.602144    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:16.737708    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:16.884128    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:17.103321    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:17.239043    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:17.384580    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:17.601916    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:17.740309    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:17.885161    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:18.101725    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:18.238533    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:18.384249    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:18.602805    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:18.740605    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:18.886974    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:19.102674    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:19.238945    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:19.385903    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:19.602298    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:19.740594    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:19.886612    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:20.106709    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:20.239000    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:20.384980    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:20.602416    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:20.738402    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:20.884808    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:21.102490    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:21.237299    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:21.393531    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:21.607020    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:21.806564    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:21.885423    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:22.101576    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:22.265739    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:22.399907    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:22.607363    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:22.738924    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1014 13:40:22.884672    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:23.102340    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:23.239096    8306 kapi.go:107] duration metric: took 23.505583938s to wait for kubernetes.io/minikube-addons=registry ...
	I1014 13:40:23.384290    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:23.603233    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:23.885145    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:24.102240    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:24.384769    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:24.602690    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:24.886998    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:25.102874    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:25.384665    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:25.602171    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:25.884618    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:26.107023    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:26.385464    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:26.602044    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:26.885395    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:27.101528    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:27.385398    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:27.602569    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:27.885504    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:28.143768    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:28.386516    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:28.601850    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:28.885250    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:29.102026    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:29.384534    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:29.601750    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:29.885298    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:30.122116    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:30.390012    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:30.602250    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:30.884469    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:31.102735    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:31.385863    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:31.604657    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:31.885553    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:32.101826    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:32.387203    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:32.601814    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:32.886183    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:33.101738    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:33.384606    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:33.606579    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:33.885322    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:34.102236    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:34.384314    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:34.602408    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:34.886579    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:35.105742    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:35.386917    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:35.603207    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:35.884469    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:36.102937    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:36.384090    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:36.601822    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:36.884282    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:37.102466    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:37.383612    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:37.603013    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:37.884409    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:38.101730    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:38.384371    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:38.602008    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:38.883988    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:39.106674    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:39.384932    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:39.602026    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:39.886402    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:40.102830    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:40.384275    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:40.605580    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:40.884707    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:41.101279    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:41.395797    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:41.602362    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:41.884980    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:42.102333    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:42.385264    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:42.601623    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:42.884600    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:43.101669    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:43.383683    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:43.602805    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:43.886094    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:44.104436    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:44.385082    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:44.601852    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:44.887396    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:45.109135    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:45.385513    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:45.601996    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:45.884566    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:46.102299    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:46.384071    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:46.601956    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:46.884420    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:47.101796    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:47.384471    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:47.602685    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:47.884566    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:48.103888    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:48.385020    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:48.602117    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:48.885604    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:49.101670    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:49.395178    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:49.602273    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:49.884084    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:50.103949    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:50.383730    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:50.601950    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:50.884778    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:51.103082    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:51.384729    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:51.602102    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:51.885210    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:52.105386    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:52.384549    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:52.603321    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:52.885567    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:53.102678    8306 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1014 13:40:53.390192    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:53.602664    8306 kapi.go:107] duration metric: took 49.0055313s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1014 13:40:53.884118    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:54.384013    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:54.885549    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:55.384347    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:55.884401    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:56.385728    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:56.884100    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:57.385020    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:57.883987    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:58.393955    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:58.884122    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:59.384813    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:40:59.886266    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:00.388901    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1014 13:41:00.884241    8306 kapi.go:107] duration metric: took 55.504973386s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1014 13:41:29.621988    8306 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1014 13:41:29.622018    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:30.123127    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:30.621826    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:31.121602    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:31.629564    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:32.126745    8306 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1014 13:41:32.624847    8306 kapi.go:107] duration metric: took 1m26.007079004s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1014 13:41:32.626898    8306 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-569374 cluster.
	I1014 13:41:32.629360    8306 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1014 13:41:32.631517    8306 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1014 13:41:32.633359    8306 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, default-storageclass, inspektor-gadget, volcano, metrics-server, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1014 13:41:32.635355    8306 addons.go:510] duration metric: took 1m39.545914481s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns default-storageclass inspektor-gadget volcano metrics-server yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I1014 13:41:32.635414    8306 start.go:246] waiting for cluster config update ...
	I1014 13:41:32.635439    8306 start.go:255] writing updated cluster config ...
	I1014 13:41:32.635731    8306 ssh_runner.go:195] Run: rm -f paused
	I1014 13:41:33.022566    8306 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 13:41:33.024713    8306 out.go:177] * Done! kubectl is now configured to use "addons-569374" cluster and "default" namespace by default
	
	
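	The startup log above records three readiness probes against the cluster before the addon wait loops finish: a pgrep for the kube-apiserver process, an HTTPS GET against the apiserver healthz endpoint at 192.168.49.2:8443, and a systemctl check that the kubelet unit is active. A minimal sketch of re-running those same checks by hand is shown below; it assumes shell access to the addons-569374 node (for example via minikube ssh) and simply reuses the commands and endpoint that appear in the log, so it is illustrative rather than the harness's own code.
		# assumption: run from a shell on the addons-569374 node (e.g. minikube -p addons-569374 ssh)
		sudo pgrep -xnf 'kube-apiserver.*minikube.*'                        # apiserver process check, as in the log
		curl -k https://192.168.49.2:8443/healthz                           # expect "ok", matching the 200 recorded above
		sudo systemctl is-active --quiet service kubelet && echo kubelet active   # kubelet service check, as in the log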
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	03a14bceaf7ec       9c8d328e7d9e8       3 minutes ago       Running             gcp-auth                                 0                   3b069bcc9f4c5       gcp-auth-c684cb797-nngwz
	03e7707d91e71       ee6d597e62dc8       3 minutes ago       Running             csi-snapshotter                          0                   a7a85c710e388       csi-hostpathplugin-97829
	cfd0d12bc2aad       642ded511e141       3 minutes ago       Running             csi-provisioner                          0                   a7a85c710e388       csi-hostpathplugin-97829
	fd2dff320e9f6       922312104da8a       3 minutes ago       Running             liveness-probe                           0                   a7a85c710e388       csi-hostpathplugin-97829
	ef3fab2633a0b       08f6b2990811a       3 minutes ago       Running             hostpath                                 0                   a7a85c710e388       csi-hostpathplugin-97829
	770c84968a1c8       0107d56dbc0be       3 minutes ago       Running             node-driver-registrar                    0                   a7a85c710e388       csi-hostpathplugin-97829
	7e5d08b2fdbdc       1a9605c872c1d       3 minutes ago       Running             admission                                0                   c7208aa66664d       volcano-admission-5874dfdd79-cg9fd
	10f790ddd2315       2d37f5a3dd01b       3 minutes ago       Running             controller                               0                   63f83a0215922       ingress-nginx-controller-5f85ff4588-w6xlh
	09816d0656ad8       a9bac31a5be8d       4 minutes ago       Running             nvidia-device-plugin-ctr                 0                   4ec4df423f792       nvidia-device-plugin-daemonset-spwkr
	7ccbd83ece159       6aa88c604f2b4       4 minutes ago       Running             volcano-scheduler                        0                   2d5682c68613b       volcano-scheduler-6c9778cbdf-9mm8c
	34f52779d078d       9a80d518f102c       4 minutes ago       Running             csi-attacher                             0                   843eceb787ff6       csi-hostpath-attacher-0
	a0cbbbcebce53       487fa743e1e22       4 minutes ago       Running             csi-resizer                              0                   9aed6e83f09de       csi-hostpath-resizer-0
	f031ac652a118       1461903ec4fe9       4 minutes ago       Running             csi-external-health-monitor-controller   0                   a7a85c710e388       csi-hostpathplugin-97829
	0bfba2fa1d045       77bdba588b953       4 minutes ago       Running             yakd                                     0                   a8034335ddb3c       yakd-dashboard-67d98fc6b-mdcc8
	68f69550b30ce       4d1e5c3e97420       4 minutes ago       Running             volume-snapshot-controller               0                   95e43a0578a1e       snapshot-controller-56fcc65765-q7drz
	f8802d5f53616       68de1ddeaded8       4 minutes ago       Running             gadget                                   0                   ee49f5b6b67b7       gadget-5g42s
	6455124c03b99       23cbb28ae641a       4 minutes ago       Running             volcano-controllers                      0                   69c60c479b500       volcano-controllers-789ffc5785-wbnfl
	3d02304d0c490       d54655ed3a854       4 minutes ago       Exited              patch                                    1                   db6f613ffbd7f       ingress-nginx-admission-patch-fnpqk
	a6d836dff29e8       d54655ed3a854       4 minutes ago       Exited              create                                   0                   a1b35159ecd68       ingress-nginx-admission-create-8gqvc
	af1e9777e6c48       7ce2150c8929b       4 minutes ago       Running             local-path-provisioner                   0                   f618e22759006       local-path-provisioner-86d989889c-whxvs
	9097696fe4807       c9cf76bb104e1       4 minutes ago       Running             registry                                 0                   427a8ed289637       registry-66c9cd494c-zcf42
	bd99d09da3d78       4d1e5c3e97420       4 minutes ago       Running             volume-snapshot-controller               0                   c171384c51156       snapshot-controller-56fcc65765-4hxrk
	41f2d8022fffe       5548a49bb60ba       4 minutes ago       Running             metrics-server                           0                   f369ced908c47       metrics-server-84c5f94fbc-7jpwg
	8f1b3bdbc7d28       434d64ac3dbf3       4 minutes ago       Running             registry-proxy                           0                   ed2a14ac538d6       registry-proxy-kcr2s
	6b17e94e22dc7       be9cac3585579       4 minutes ago       Running             cloud-spanner-emulator                   0                   7fb2b7a3c3d1e       cloud-spanner-emulator-5b584cc74-9vgs9
	d2b343a3623cd       2f6c962e7b831       4 minutes ago       Running             coredns                                  0                   2acdf62a11f89       coredns-7c65d6cfc9-d82ng
	d8ce8ece9b6de       35508c2f890c4       4 minutes ago       Running             minikube-ingress-dns                     0                   3f4dff69b19d3       kube-ingress-dns-minikube
	a573d9d65b40b       ba04bb24b9575       4 minutes ago       Running             storage-provisioner                      0                   d81c5dbb63482       storage-provisioner
	7922e262c6444       0bcd66b03df5f       4 minutes ago       Running             kindnet-cni                              0                   f4fcf7a39b432       kindnet-kp25n
	e77105cfb7c13       24a140c548c07       4 minutes ago       Running             kube-proxy                               0                   ed67dbbf2b577       kube-proxy-kr2zj
	d9367ce18d8a0       7f8aa378bb47d       5 minutes ago       Running             kube-scheduler                           0                   7d0f78007a318       kube-scheduler-addons-569374
	5661101c56976       d3f53a98c0a9d       5 minutes ago       Running             kube-apiserver                           0                   e5abc24983720       kube-apiserver-addons-569374
	10d1ed2ba466b       279f381cb3736       5 minutes ago       Running             kube-controller-manager                  0                   b374301ad03d9       kube-controller-manager-addons-569374
	1731540f4c13e       27e3830e14027       5 minutes ago       Running             etcd                                     0                   0e0b4832e9914       etcd-addons-569374
	
	
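	The container status table above is the runtime's view of every container on the node, including the Exited admission create/patch jobs. On this containerd-based node one way to reproduce a listing like it is to query the CRI directly with crictl; the report does not show the exact command used to collect the table, so the invocation below is an assumption about how to gather equivalent output, not the harness's own call.
		# assumption: crictl is available on the node and containerd is the CRI endpoint
		minikube -p addons-569374 ssh -- sudo crictl ps -a    # list running and exited containers with image, state, and pod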
	==> containerd <==
	Oct 14 13:41:47 addons-569374 containerd[812]: time="2024-10-14T13:41:47.892121488Z" level=info msg="TearDown network for sandbox \"570be7983e113e731a8a55386231758fc6a707eafe777d03f705c9f099e072f5\" successfully"
	Oct 14 13:41:47 addons-569374 containerd[812]: time="2024-10-14T13:41:47.892161004Z" level=info msg="StopPodSandbox for \"570be7983e113e731a8a55386231758fc6a707eafe777d03f705c9f099e072f5\" returns successfully"
	Oct 14 13:41:47 addons-569374 containerd[812]: time="2024-10-14T13:41:47.892812264Z" level=info msg="RemovePodSandbox for \"570be7983e113e731a8a55386231758fc6a707eafe777d03f705c9f099e072f5\""
	Oct 14 13:41:47 addons-569374 containerd[812]: time="2024-10-14T13:41:47.892855201Z" level=info msg="Forcibly stopping sandbox \"570be7983e113e731a8a55386231758fc6a707eafe777d03f705c9f099e072f5\""
	Oct 14 13:41:47 addons-569374 containerd[812]: time="2024-10-14T13:41:47.900444602Z" level=info msg="TearDown network for sandbox \"570be7983e113e731a8a55386231758fc6a707eafe777d03f705c9f099e072f5\" successfully"
	Oct 14 13:41:47 addons-569374 containerd[812]: time="2024-10-14T13:41:47.906953570Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"570be7983e113e731a8a55386231758fc6a707eafe777d03f705c9f099e072f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 14 13:41:47 addons-569374 containerd[812]: time="2024-10-14T13:41:47.907079009Z" level=info msg="RemovePodSandbox \"570be7983e113e731a8a55386231758fc6a707eafe777d03f705c9f099e072f5\" returns successfully"
	Oct 14 13:41:47 addons-569374 containerd[812]: time="2024-10-14T13:41:47.907645937Z" level=info msg="StopPodSandbox for \"eb802d57e3e9d0fe745791bec6895ede3b7bf7c0ae04c5115c2823576b5b166d\""
	Oct 14 13:41:47 addons-569374 containerd[812]: time="2024-10-14T13:41:47.915254973Z" level=info msg="TearDown network for sandbox \"eb802d57e3e9d0fe745791bec6895ede3b7bf7c0ae04c5115c2823576b5b166d\" successfully"
	Oct 14 13:41:47 addons-569374 containerd[812]: time="2024-10-14T13:41:47.915295490Z" level=info msg="StopPodSandbox for \"eb802d57e3e9d0fe745791bec6895ede3b7bf7c0ae04c5115c2823576b5b166d\" returns successfully"
	Oct 14 13:41:47 addons-569374 containerd[812]: time="2024-10-14T13:41:47.916300240Z" level=info msg="RemovePodSandbox for \"eb802d57e3e9d0fe745791bec6895ede3b7bf7c0ae04c5115c2823576b5b166d\""
	Oct 14 13:41:47 addons-569374 containerd[812]: time="2024-10-14T13:41:47.916362172Z" level=info msg="Forcibly stopping sandbox \"eb802d57e3e9d0fe745791bec6895ede3b7bf7c0ae04c5115c2823576b5b166d\""
	Oct 14 13:41:47 addons-569374 containerd[812]: time="2024-10-14T13:41:47.924306811Z" level=info msg="TearDown network for sandbox \"eb802d57e3e9d0fe745791bec6895ede3b7bf7c0ae04c5115c2823576b5b166d\" successfully"
	Oct 14 13:41:47 addons-569374 containerd[812]: time="2024-10-14T13:41:47.930325526Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb802d57e3e9d0fe745791bec6895ede3b7bf7c0ae04c5115c2823576b5b166d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 14 13:41:47 addons-569374 containerd[812]: time="2024-10-14T13:41:47.930473266Z" level=info msg="RemovePodSandbox \"eb802d57e3e9d0fe745791bec6895ede3b7bf7c0ae04c5115c2823576b5b166d\" returns successfully"
	Oct 14 13:42:47 addons-569374 containerd[812]: time="2024-10-14T13:42:47.935190135Z" level=info msg="RemoveContainer for \"fed6413851fccb5e44edc7eeec1ba3769291284d8e8dc666c872164c529cf37f\""
	Oct 14 13:42:47 addons-569374 containerd[812]: time="2024-10-14T13:42:47.943604762Z" level=info msg="RemoveContainer for \"fed6413851fccb5e44edc7eeec1ba3769291284d8e8dc666c872164c529cf37f\" returns successfully"
	Oct 14 13:42:47 addons-569374 containerd[812]: time="2024-10-14T13:42:47.945625789Z" level=info msg="StopPodSandbox for \"bcd61d50b9ae9ee2c4386712aeafc20582179e47af57a97616ea383dcaa4b33c\""
	Oct 14 13:42:47 addons-569374 containerd[812]: time="2024-10-14T13:42:47.970562331Z" level=info msg="TearDown network for sandbox \"bcd61d50b9ae9ee2c4386712aeafc20582179e47af57a97616ea383dcaa4b33c\" successfully"
	Oct 14 13:42:47 addons-569374 containerd[812]: time="2024-10-14T13:42:47.970614047Z" level=info msg="StopPodSandbox for \"bcd61d50b9ae9ee2c4386712aeafc20582179e47af57a97616ea383dcaa4b33c\" returns successfully"
	Oct 14 13:42:47 addons-569374 containerd[812]: time="2024-10-14T13:42:47.971052312Z" level=info msg="RemovePodSandbox for \"bcd61d50b9ae9ee2c4386712aeafc20582179e47af57a97616ea383dcaa4b33c\""
	Oct 14 13:42:47 addons-569374 containerd[812]: time="2024-10-14T13:42:47.971090761Z" level=info msg="Forcibly stopping sandbox \"bcd61d50b9ae9ee2c4386712aeafc20582179e47af57a97616ea383dcaa4b33c\""
	Oct 14 13:42:47 addons-569374 containerd[812]: time="2024-10-14T13:42:47.978510951Z" level=info msg="TearDown network for sandbox \"bcd61d50b9ae9ee2c4386712aeafc20582179e47af57a97616ea383dcaa4b33c\" successfully"
	Oct 14 13:42:47 addons-569374 containerd[812]: time="2024-10-14T13:42:47.987916468Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bcd61d50b9ae9ee2c4386712aeafc20582179e47af57a97616ea383dcaa4b33c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Oct 14 13:42:47 addons-569374 containerd[812]: time="2024-10-14T13:42:47.988050933Z" level=info msg="RemovePodSandbox \"bcd61d50b9ae9ee2c4386712aeafc20582179e47af57a97616ea383dcaa4b33c\" returns successfully"
	
	
	==> coredns [d2b343a3623cd23acab3da90eea72c762c6ea5ca85eff0f91ec1f5d50e77c2af] <==
	[INFO] 10.244.0.3:47558 - 3014 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000104147s
	[INFO] 10.244.0.3:47558 - 53950 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001762031s
	[INFO] 10.244.0.3:47558 - 3078 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001356955s
	[INFO] 10.244.0.3:47558 - 18502 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00008644s
	[INFO] 10.244.0.3:47558 - 29771 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.00014541s
	[INFO] 10.244.0.3:52423 - 6474 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000116364s
	[INFO] 10.244.0.3:52423 - 6269 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000177985s
	[INFO] 10.244.0.3:59735 - 16937 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000055072s
	[INFO] 10.244.0.3:59735 - 17365 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000184212s
	[INFO] 10.244.0.3:41950 - 45142 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052233s
	[INFO] 10.244.0.3:41950 - 44904 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000043881s
	[INFO] 10.244.0.3:55049 - 55458 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001299519s
	[INFO] 10.244.0.3:55049 - 55261 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001237251s
	[INFO] 10.244.0.3:49929 - 44865 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00006263s
	[INFO] 10.244.0.3:49929 - 44709 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000110555s
	[INFO] 10.244.0.25:51909 - 45546 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000139773s
	[INFO] 10.244.0.25:50592 - 12407 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000526469s
	[INFO] 10.244.0.25:36129 - 58382 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000168655s
	[INFO] 10.244.0.25:46885 - 10680 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001423s
	[INFO] 10.244.0.25:40042 - 62772 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000141151s
	[INFO] 10.244.0.25:51010 - 4566 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129008s
	[INFO] 10.244.0.25:41514 - 39887 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002328556s
	[INFO] 10.244.0.25:33444 - 35382 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001998852s
	[INFO] 10.244.0.25:56231 - 7108 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000799288s
	[INFO] 10.244.0.25:58277 - 7219 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002789893s
	
	
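	The coredns log above shows each lookup for registry.kube-system walking the pod's DNS search path (the NXDOMAIN answers for the .cluster.local and compute.internal expansions) before the fully qualified service name resolves with NOERROR. A quick way to reproduce such a query from inside the cluster is a short-lived busybox pod; the image tag and pod name below are illustrative assumptions and are not taken from the report.
		# assumption: busybox:1.36 is pullable in this cluster; dns-test is an arbitrary pod name
		kubectl --context addons-569374 run dns-test --rm -it --restart=Never \
		  --image=busybox:1.36 -- nslookup registry.kube-system.svc.cluster.local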
	==> describe nodes <==
	Name:               addons-569374
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-569374
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=addons-569374
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T13_39_48_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-569374
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-569374"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 13:39:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-569374
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 13:44:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 13:41:49 +0000   Mon, 14 Oct 2024 13:39:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 13:41:49 +0000   Mon, 14 Oct 2024 13:39:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 13:41:49 +0000   Mon, 14 Oct 2024 13:39:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 13:41:49 +0000   Mon, 14 Oct 2024 13:39:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-569374
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 90565d941284427e85a62b418232c2bc
	  System UUID:                f9ec733f-433b-4c5f-a438-e372770916e9
	  Boot ID:                    7f37d908-3a8a-4f73-8f6a-d0166945a75f
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-9vgs9       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  gadget                      gadget-5g42s                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  gcp-auth                    gcp-auth-c684cb797-nngwz                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-w6xlh    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m49s
	  kube-system                 coredns-7c65d6cfc9-d82ng                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m59s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 csi-hostpathplugin-97829                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 etcd-addons-569374                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m5s
	  kube-system                 kindnet-kp25n                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m59s
	  kube-system                 kube-apiserver-addons-569374                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-controller-manager-addons-569374        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-kr2zj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-scheduler-addons-569374                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 metrics-server-84c5f94fbc-7jpwg              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m53s
	  kube-system                 nvidia-device-plugin-daemonset-spwkr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 registry-66c9cd494c-zcf42                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 registry-proxy-kcr2s                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 snapshot-controller-56fcc65765-4hxrk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 snapshot-controller-56fcc65765-q7drz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  local-path-storage          local-path-provisioner-86d989889c-whxvs      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  volcano-system              volcano-admission-5874dfdd79-cg9fd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  volcano-system              volcano-controllers-789ffc5785-wbnfl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  volcano-system              volcano-scheduler-6c9778cbdf-9mm8c           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-mdcc8               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 4m57s  kube-proxy       
	  Normal   Starting                 5m4s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m4s   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  5m4s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  5m3s   kubelet          Node addons-569374 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m3s   kubelet          Node addons-569374 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m3s   kubelet          Node addons-569374 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m59s  node-controller  Node addons-569374 event: Registered Node addons-569374 in Controller
	
	
	==> dmesg <==
	[Oct14 13:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014705] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.413719] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.054156] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.016129] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.802336] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.474781] kauditd_printk_skb: 34 callbacks suppressed
	
	
	==> etcd [1731540f4c13e88ed18c4b7b4b3bedf7a80199626fbd34305c61284a0364a038] <==
	{"level":"info","ts":"2024-10-14T13:39:41.429908Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-14T13:39:41.429930Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-14T13:39:41.429938Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-10-14T13:39:41.430191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-10-14T13:39:41.430259Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-10-14T13:39:42.217120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-14T13:39:42.217374Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-14T13:39:42.217479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-10-14T13:39:42.217605Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-10-14T13:39:42.217696Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-14T13:39:42.217793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-10-14T13:39:42.217890Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-10-14T13:39:42.221135Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T13:39:42.222161Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-569374 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-14T13:39:42.224054Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T13:39:42.224548Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-14T13:39:42.229167Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T13:39:42.229286Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T13:39:42.229322Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-14T13:39:42.230091Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T13:39:42.230911Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-14T13:39:42.247090Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-10-14T13:39:42.237405Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-14T13:39:42.247541Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-14T13:39:42.249307Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [03a14bceaf7ec3b61094802cedca696e7fa01e01c8c0085bd4969d7f16782e3a] <==
	2024/10/14 13:41:32 GCP Auth Webhook started!
	2024/10/14 13:41:49 Ready to marshal response ...
	2024/10/14 13:41:49 Ready to write response ...
	2024/10/14 13:41:50 Ready to marshal response ...
	2024/10/14 13:41:50 Ready to write response ...
	
	
	==> kernel <==
	 13:44:51 up 27 min,  0 users,  load average: 0.79, 1.31, 0.74
	Linux addons-569374 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [7922e262c6444dc95edb02134ac2fafd344846a235b55bd4efe5b711e0a315a9] <==
	I1014 13:42:46.933899       1 main.go:300] handling current node
	I1014 13:42:56.934696       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:42:56.934733       1 main.go:300] handling current node
	I1014 13:43:06.941643       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:43:06.941679       1 main.go:300] handling current node
	I1014 13:43:16.941144       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:43:16.941181       1 main.go:300] handling current node
	I1014 13:43:26.942884       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:43:26.942918       1 main.go:300] handling current node
	I1014 13:43:36.941129       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:43:36.941163       1 main.go:300] handling current node
	I1014 13:43:46.942938       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:43:46.943039       1 main.go:300] handling current node
	I1014 13:43:56.934804       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:43:56.934978       1 main.go:300] handling current node
	I1014 13:44:06.940816       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:44:06.941073       1 main.go:300] handling current node
	I1014 13:44:16.941428       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:44:16.941470       1 main.go:300] handling current node
	I1014 13:44:26.941220       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:44:26.941254       1 main.go:300] handling current node
	I1014 13:44:36.940966       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:44:36.941001       1 main.go:300] handling current node
	I1014 13:44:46.934085       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1014 13:44:46.934320       1 main.go:300] handling current node
	
	
	==> kube-apiserver [5661101c5697645b9338d5008bea67d81d2ea4c8b8490a5ad435f0957b709bf9] <==
	W1014 13:40:45.532646       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.13.49:443: connect: connection refused
	W1014 13:40:46.541909       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.13.49:443: connect: connection refused
	W1014 13:40:47.582997       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.13.49:443: connect: connection refused
	W1014 13:40:48.536439       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.142.159:443: connect: connection refused
	E1014 13:40:48.536479       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.142.159:443: connect: connection refused" logger="UnhandledError"
	W1014 13:40:48.538388       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.100.13.49:443: connect: connection refused
	W1014 13:40:48.607425       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.13.49:443: connect: connection refused
	W1014 13:40:49.704944       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.13.49:443: connect: connection refused
	W1014 13:40:50.756564       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.13.49:443: connect: connection refused
	W1014 13:40:51.817007       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.13.49:443: connect: connection refused
	W1014 13:40:52.917880       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.13.49:443: connect: connection refused
	W1014 13:40:53.924607       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.13.49:443: connect: connection refused
	W1014 13:40:54.933180       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.13.49:443: connect: connection refused
	W1014 13:40:55.982952       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.13.49:443: connect: connection refused
	W1014 13:40:57.064225       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.13.49:443: connect: connection refused
	W1014 13:40:58.133336       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.13.49:443: connect: connection refused
	W1014 13:40:59.178822       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.13.49:443: connect: connection refused
	W1014 13:41:09.596898       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.142.159:443: connect: connection refused
	E1014 13:41:09.596937       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.142.159:443: connect: connection refused" logger="UnhandledError"
	W1014 13:41:09.633775       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.142.159:443: connect: connection refused
	E1014 13:41:09.633817       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.142.159:443: connect: connection refused" logger="UnhandledError"
	W1014 13:41:29.504370       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.142.159:443: connect: connection refused
	E1014 13:41:29.504411       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.142.159:443: connect: connection refused" logger="UnhandledError"
	I1014 13:41:49.568398       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I1014 13:41:49.634447       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [10d1ed2ba466b7f51ef22b2c60386f28be38f19308c78397f3ed12e98649cd0e] <==
	I1014 13:41:10.510170       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1014 13:41:10.543108       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1014 13:41:11.544317       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1014 13:41:12.566536       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1014 13:41:12.656979       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1014 13:41:12.675050       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1014 13:41:13.574984       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1014 13:41:13.584326       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1014 13:41:13.590537       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I1014 13:41:13.667205       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1014 13:41:13.677275       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1014 13:41:13.683264       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I1014 13:41:18.926678       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-569374"
	I1014 13:41:29.526528       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-c684cb797" duration="25.013353ms"
	I1014 13:41:29.546872       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-c684cb797" duration="19.865341ms"
	I1014 13:41:29.547069       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-c684cb797" duration="154.936µs"
	I1014 13:41:29.571954       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-c684cb797" duration="49.353µs"
	I1014 13:41:32.632550       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-c684cb797" duration="15.856077ms"
	I1014 13:41:32.633005       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-c684cb797" duration="48.336µs"
	I1014 13:41:43.021285       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I1014 13:41:43.025899       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I1014 13:41:43.071477       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I1014 13:41:43.073914       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I1014 13:41:49.296945       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	I1014 13:41:49.567872       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-569374"
	
	
	==> kube-proxy [e77105cfb7c1358aa5753ae73fa06623b4154ef8b9614dd6fd6cef0ea6b9f78b] <==
	I1014 13:39:54.393266       1 server_linux.go:66] "Using iptables proxy"
	I1014 13:39:54.478117       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1014 13:39:54.478207       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1014 13:39:54.512137       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1014 13:39:54.512200       1 server_linux.go:169] "Using iptables Proxier"
	I1014 13:39:54.517618       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1014 13:39:54.518080       1 server.go:483] "Version info" version="v1.31.1"
	I1014 13:39:54.518093       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1014 13:39:54.519508       1 config.go:199] "Starting service config controller"
	I1014 13:39:54.519543       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1014 13:39:54.519575       1 config.go:105] "Starting endpoint slice config controller"
	I1014 13:39:54.519580       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1014 13:39:54.525916       1 config.go:328] "Starting node config controller"
	I1014 13:39:54.525936       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1014 13:39:54.620109       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1014 13:39:54.620175       1 shared_informer.go:320] Caches are synced for service config
	I1014 13:39:54.627056       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [d9367ce18d8a06302f26d561c17364ec53bd80546327e0257535d04426c8dd69] <==
	W1014 13:39:45.505576       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1014 13:39:45.505596       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.505677       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1014 13:39:45.505694       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.505757       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 13:39:45.505774       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.505826       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1014 13:39:45.505843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.505905       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1014 13:39:45.505922       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.505972       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 13:39:45.505989       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.506038       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1014 13:39:45.506054       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.506115       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 13:39:45.506133       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.506296       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 13:39:45.506321       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.506398       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1014 13:39:45.506417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:45.506489       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 13:39:45.506507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1014 13:39:46.316117       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 13:39:46.316347       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1014 13:39:47.095897       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 14 13:41:29 addons-569374 kubelet[1483]: E1014 13:41:29.535110    1483 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0dfd08ae-d9f7-4785-9e12-7b37bc3ad31e" containerName="create"
	Oct 14 13:41:29 addons-569374 kubelet[1483]: E1014 13:41:29.535616    1483 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a2ed393-cc5e-4135-9e1e-8228be8a36fc" containerName="patch"
	Oct 14 13:41:29 addons-569374 kubelet[1483]: E1014 13:41:29.535696    1483 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5a2ed393-cc5e-4135-9e1e-8228be8a36fc" containerName="patch"
	Oct 14 13:41:29 addons-569374 kubelet[1483]: I1014 13:41:29.535838    1483 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a2ed393-cc5e-4135-9e1e-8228be8a36fc" containerName="patch"
	Oct 14 13:41:29 addons-569374 kubelet[1483]: I1014 13:41:29.535909    1483 memory_manager.go:354] "RemoveStaleState removing state" podUID="0dfd08ae-d9f7-4785-9e12-7b37bc3ad31e" containerName="create"
	Oct 14 13:41:29 addons-569374 kubelet[1483]: I1014 13:41:29.679820    1483 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1757b191-c71b-4283-b955-e1783c06e815-webhook-certs\") pod \"gcp-auth-c684cb797-nngwz\" (UID: \"1757b191-c71b-4283-b955-e1783c06e815\") " pod="gcp-auth/gcp-auth-c684cb797-nngwz"
	Oct 14 13:41:29 addons-569374 kubelet[1483]: I1014 13:41:29.679885    1483 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1757b191-c71b-4283-b955-e1783c06e815-gcp-creds\") pod \"gcp-auth-c684cb797-nngwz\" (UID: \"1757b191-c71b-4283-b955-e1783c06e815\") " pod="gcp-auth/gcp-auth-c684cb797-nngwz"
	Oct 14 13:41:29 addons-569374 kubelet[1483]: I1014 13:41:29.679917    1483 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx7xj\" (UniqueName: \"kubernetes.io/projected/1757b191-c71b-4283-b955-e1783c06e815-kube-api-access-tx7xj\") pod \"gcp-auth-c684cb797-nngwz\" (UID: \"1757b191-c71b-4283-b955-e1783c06e815\") " pod="gcp-auth/gcp-auth-c684cb797-nngwz"
	Oct 14 13:41:29 addons-569374 kubelet[1483]: I1014 13:41:29.679947    1483 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-project\" (UniqueName: \"kubernetes.io/host-path/1757b191-c71b-4283-b955-e1783c06e815-gcp-project\") pod \"gcp-auth-c684cb797-nngwz\" (UID: \"1757b191-c71b-4283-b955-e1783c06e815\") " pod="gcp-auth/gcp-auth-c684cb797-nngwz"
	Oct 14 13:41:31 addons-569374 kubelet[1483]: I1014 13:41:31.786581    1483 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-zcf42" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 13:41:32 addons-569374 kubelet[1483]: I1014 13:41:32.615974    1483 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-c684cb797-nngwz" podStartSLOduration=1.379707462 podStartE2EDuration="3.615945757s" podCreationTimestamp="2024-10-14 13:41:29 +0000 UTC" firstStartedPulling="2024-10-14 13:41:29.980641962 +0000 UTC m=+102.295253247" lastFinishedPulling="2024-10-14 13:41:32.216880257 +0000 UTC m=+104.531491542" observedRunningTime="2024-10-14 13:41:32.614378967 +0000 UTC m=+104.928990260" watchObservedRunningTime="2024-10-14 13:41:32.615945757 +0000 UTC m=+104.930557041"
	Oct 14 13:41:32 addons-569374 kubelet[1483]: I1014 13:41:32.784598    1483 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-kcr2s" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 13:41:43 addons-569374 kubelet[1483]: I1014 13:41:43.787680    1483 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dfd08ae-d9f7-4785-9e12-7b37bc3ad31e" path="/var/lib/kubelet/pods/0dfd08ae-d9f7-4785-9e12-7b37bc3ad31e/volumes"
	Oct 14 13:41:43 addons-569374 kubelet[1483]: I1014 13:41:43.788129    1483 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a2ed393-cc5e-4135-9e1e-8228be8a36fc" path="/var/lib/kubelet/pods/5a2ed393-cc5e-4135-9e1e-8228be8a36fc/volumes"
	Oct 14 13:41:47 addons-569374 kubelet[1483]: I1014 13:41:47.860926    1483 scope.go:117] "RemoveContainer" containerID="6a28618c5848d79c114ed652ffb4d0f977182c6117f5ee6be50c16e9bc2394a3"
	Oct 14 13:41:47 addons-569374 kubelet[1483]: I1014 13:41:47.869739    1483 scope.go:117] "RemoveContainer" containerID="93a6978c9af05cb797861f520b098730fefd97223d5c4efe2ff09541b5d2b216"
	Oct 14 13:41:49 addons-569374 kubelet[1483]: I1014 13:41:49.788017    1483 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="255fff8d-f975-4af7-b6ef-9fe8946182b6" path="/var/lib/kubelet/pods/255fff8d-f975-4af7-b6ef-9fe8946182b6/volumes"
	Oct 14 13:42:01 addons-569374 kubelet[1483]: I1014 13:42:01.784652    1483 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-spwkr" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 13:42:46 addons-569374 kubelet[1483]: I1014 13:42:46.784334    1483 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-kcr2s" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 13:42:47 addons-569374 kubelet[1483]: I1014 13:42:47.933234    1483 scope.go:117] "RemoveContainer" containerID="fed6413851fccb5e44edc7eeec1ba3769291284d8e8dc666c872164c529cf37f"
	Oct 14 13:42:50 addons-569374 kubelet[1483]: I1014 13:42:50.784347    1483 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-zcf42" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 13:43:04 addons-569374 kubelet[1483]: I1014 13:43:04.785005    1483 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-spwkr" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 13:44:03 addons-569374 kubelet[1483]: I1014 13:44:03.787950    1483 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-zcf42" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 13:44:05 addons-569374 kubelet[1483]: I1014 13:44:05.784785    1483 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-spwkr" secret="" err="secret \"gcp-auth\" not found"
	Oct 14 13:44:15 addons-569374 kubelet[1483]: I1014 13:44:15.785034    1483 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-kcr2s" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [a573d9d65b40baf352a6a1d340267a5afea9ee43152c1109015ec43dd7d58230] <==
	I1014 13:39:59.441433       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 13:39:59.465025       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 13:39:59.465108       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 13:39:59.481504       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 13:39:59.483752       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-569374_b31608b7-cf2a-4264-a303-c4640296f7be!
	I1014 13:39:59.485470       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3bfbf63e-d18d-497b-a17c-ff87b5587c5d", APIVersion:"v1", ResourceVersion:"559", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-569374_b31608b7-cf2a-4264-a303-c4640296f7be became leader
	I1014 13:39:59.584216       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-569374_b31608b7-cf2a-4264-a303-c4640296f7be!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-569374 -n addons-569374
helpers_test.go:261: (dbg) Run:  kubectl --context addons-569374 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-8gqvc ingress-nginx-admission-patch-fnpqk test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-569374 describe pod ingress-nginx-admission-create-8gqvc ingress-nginx-admission-patch-fnpqk test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-569374 describe pod ingress-nginx-admission-create-8gqvc ingress-nginx-admission-patch-fnpqk test-job-nginx-0: exit status 1 (88.373575ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-8gqvc" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-fnpqk" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-569374 describe pod ingress-nginx-admission-create-8gqvc ingress-nginx-admission-patch-fnpqk test-job-nginx-0: exit status 1
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-569374 addons disable volcano --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-569374 addons disable volcano --alsologtostderr -v=1: (11.141791704s)
--- FAIL: TestAddons/serial/Volcano (210.99s)
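For context on the Volcano failure above: the node description shows only 2 allocatable CPUs with 1050m (52%) already requested by add-on and system pods, and the post-mortem lists test-job-nginx-0 among the non-running pods, which points at a scheduling/CPU-headroom problem rather than a crash. A minimal way to confirm that on a live profile is sketched below; it assumes the addons-569374 cluster still exists and that the Volcano job's pod is named test-job-nginx-0 in the my-volcano namespace as in this run, and it uses only standard kubectl subcommands:

	# How much CPU/memory is already requested on the single minikube node
	kubectl --context addons-569374 describe node addons-569374 | grep -A 8 'Allocated resources'

	# The scheduler's recorded reason for not placing the Volcano job's pod
	kubectl --context addons-569374 get pod test-job-nginx-0 -n my-volcano \
	  -o jsonpath='{.status.conditions[?(@.type=="PodScheduled")].message}'

If the pod's CPU request does not fit into what remains of the 2-CPU node, lowering the job's CPU request or running the suite on a machine with more CPUs are the usual fixes.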

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (382.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-805757 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-805757 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m19.588396071s)

                                                
                                                
-- stdout --
	* [old-k8s-version-805757] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-2229/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2229/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-805757" primary control-plane node in "old-k8s-version-805757" cluster
	* Pulling base image v0.0.45-1728382586-19774 ...
	* Restarting existing docker container for "old-k8s-version-805757" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-805757 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 14:27:36.574849  216259 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:27:36.575013  216259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:27:36.575019  216259 out.go:358] Setting ErrFile to fd 2...
	I1014 14:27:36.575025  216259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:27:36.575309  216259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2229/.minikube/bin
	I1014 14:27:36.575648  216259 out.go:352] Setting JSON to false
	I1014 14:27:36.576551  216259 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4208,"bootTime":1728911849,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 14:27:36.576616  216259 start.go:139] virtualization:  
	I1014 14:27:36.589332  216259 out.go:177] * [old-k8s-version-805757] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1014 14:27:36.597904  216259 notify.go:220] Checking for updates...
	I1014 14:27:36.599630  216259 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 14:27:36.601455  216259 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 14:27:36.603798  216259 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-2229/kubeconfig
	I1014 14:27:36.605330  216259 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2229/.minikube
	I1014 14:27:36.607019  216259 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 14:27:36.608830  216259 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 14:27:36.611055  216259 config.go:182] Loaded profile config "old-k8s-version-805757": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1014 14:27:36.613386  216259 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1014 14:27:36.614928  216259 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 14:27:36.651119  216259 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1014 14:27:36.651245  216259 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 14:27:36.730420  216259 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:67 SystemTime:2024-10-14 14:27:36.719229725 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 14:27:36.730526  216259 docker.go:318] overlay module found
	I1014 14:27:36.732921  216259 out.go:177] * Using the docker driver based on existing profile
	I1014 14:27:36.734541  216259 start.go:297] selected driver: docker
	I1014 14:27:36.734563  216259 start.go:901] validating driver "docker" against &{Name:old-k8s-version-805757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-805757 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:27:36.734759  216259 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 14:27:36.735472  216259 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 14:27:36.830946  216259 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:67 SystemTime:2024-10-14 14:27:36.821326882 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 14:27:36.831362  216259 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 14:27:36.831383  216259 cni.go:84] Creating CNI manager for ""
	I1014 14:27:36.831426  216259 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1014 14:27:36.831464  216259 start.go:340] cluster config:
	{Name:old-k8s-version-805757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-805757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:27:36.833567  216259 out.go:177] * Starting "old-k8s-version-805757" primary control-plane node in "old-k8s-version-805757" cluster
	I1014 14:27:36.835556  216259 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1014 14:27:36.837184  216259 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
	I1014 14:27:36.838691  216259 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1014 14:27:36.838734  216259 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-2229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1014 14:27:36.838742  216259 cache.go:56] Caching tarball of preloaded images
	I1014 14:27:36.838823  216259 preload.go:172] Found /home/jenkins/minikube-integration/19790-2229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1014 14:27:36.838831  216259 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I1014 14:27:36.838948  216259 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/config.json ...
	I1014 14:27:36.839156  216259 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1014 14:27:36.865659  216259 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon, skipping pull
	I1014 14:27:36.865682  216259 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in daemon, skipping load
	I1014 14:27:36.865695  216259 cache.go:194] Successfully downloaded all kic artifacts
	I1014 14:27:36.865731  216259 start.go:360] acquireMachinesLock for old-k8s-version-805757: {Name:mk36395827bf8971882c1d4807a31017310a8cc1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:27:36.865785  216259 start.go:364] duration metric: took 33.773µs to acquireMachinesLock for "old-k8s-version-805757"
	I1014 14:27:36.865804  216259 start.go:96] Skipping create...Using existing machine configuration
	I1014 14:27:36.865809  216259 fix.go:54] fixHost starting: 
	I1014 14:27:36.866073  216259 cli_runner.go:164] Run: docker container inspect old-k8s-version-805757 --format={{.State.Status}}
	I1014 14:27:36.888259  216259 fix.go:112] recreateIfNeeded on old-k8s-version-805757: state=Stopped err=<nil>
	W1014 14:27:36.888421  216259 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 14:27:36.890718  216259 out.go:177] * Restarting existing docker container for "old-k8s-version-805757" ...
	I1014 14:27:36.893736  216259 cli_runner.go:164] Run: docker start old-k8s-version-805757
	I1014 14:27:37.284888  216259 cli_runner.go:164] Run: docker container inspect old-k8s-version-805757 --format={{.State.Status}}
	I1014 14:27:37.309737  216259 kic.go:430] container "old-k8s-version-805757" state is running.
	I1014 14:27:37.310108  216259 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-805757
	I1014 14:27:37.356882  216259 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/config.json ...
	I1014 14:27:37.357133  216259 machine.go:93] provisionDockerMachine start ...
	I1014 14:27:37.357196  216259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-805757
	I1014 14:27:37.379134  216259 main.go:141] libmachine: Using SSH client type: native
	I1014 14:27:37.379448  216259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1014 14:27:37.379458  216259 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 14:27:37.380184  216259 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34078->127.0.0.1:33063: read: connection reset by peer
	I1014 14:27:40.512632  216259 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-805757
	
	I1014 14:27:40.512662  216259 ubuntu.go:169] provisioning hostname "old-k8s-version-805757"
	I1014 14:27:40.512727  216259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-805757
	I1014 14:27:40.538934  216259 main.go:141] libmachine: Using SSH client type: native
	I1014 14:27:40.539211  216259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1014 14:27:40.539229  216259 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-805757 && echo "old-k8s-version-805757" | sudo tee /etc/hostname
	I1014 14:27:40.694390  216259 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-805757
	
	I1014 14:27:40.694474  216259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-805757
	I1014 14:27:40.719097  216259 main.go:141] libmachine: Using SSH client type: native
	I1014 14:27:40.719924  216259 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33063 <nil> <nil>}
	I1014 14:27:40.719958  216259 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-805757' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-805757/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-805757' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 14:27:40.857815  216259 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 14:27:40.857855  216259 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19790-2229/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-2229/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-2229/.minikube}
	I1014 14:27:40.857880  216259 ubuntu.go:177] setting up certificates
	I1014 14:27:40.857890  216259 provision.go:84] configureAuth start
	I1014 14:27:40.857964  216259 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-805757
	I1014 14:27:40.879388  216259 provision.go:143] copyHostCerts
	I1014 14:27:40.879460  216259 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-2229/.minikube/ca.pem, removing ...
	I1014 14:27:40.879481  216259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-2229/.minikube/ca.pem
	I1014 14:27:40.879562  216259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-2229/.minikube/ca.pem (1082 bytes)
	I1014 14:27:40.879677  216259 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-2229/.minikube/cert.pem, removing ...
	I1014 14:27:40.879688  216259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-2229/.minikube/cert.pem
	I1014 14:27:40.879721  216259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-2229/.minikube/cert.pem (1123 bytes)
	I1014 14:27:40.879829  216259 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-2229/.minikube/key.pem, removing ...
	I1014 14:27:40.879839  216259 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-2229/.minikube/key.pem
	I1014 14:27:40.879865  216259 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-2229/.minikube/key.pem (1679 bytes)
	I1014 14:27:40.879927  216259 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-2229/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-805757 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-805757]
	I1014 14:27:42.031562  216259 provision.go:177] copyRemoteCerts
	I1014 14:27:42.031654  216259 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 14:27:42.031706  216259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-805757
	I1014 14:27:42.051614  216259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/old-k8s-version-805757/id_rsa Username:docker}
	I1014 14:27:42.156913  216259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 14:27:42.263225  216259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1014 14:27:42.312968  216259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 14:27:42.346019  216259 provision.go:87] duration metric: took 1.488110465s to configureAuth
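Note: copyRemoteCerts above pushes the CA plus the freshly generated server cert/key into /etc/docker inside the node, and the SANs requested for that server cert were listed a few lines earlier. A hedged way to double-check them on the host (cert path taken from the provision step above; the openssl invocation itself is an illustrative sketch, not part of this run):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/19790-2229/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
    # expected to mention: 127.0.0.1, 192.168.85.2, localhost, minikube, old-k8s-version-805757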
	I1014 14:27:42.346103  216259 ubuntu.go:193] setting minikube options for container-runtime
	I1014 14:27:42.346370  216259 config.go:182] Loaded profile config "old-k8s-version-805757": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1014 14:27:42.346413  216259 machine.go:96] duration metric: took 4.989269165s to provisionDockerMachine
	I1014 14:27:42.346436  216259 start.go:293] postStartSetup for "old-k8s-version-805757" (driver="docker")
	I1014 14:27:42.346460  216259 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 14:27:42.346567  216259 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 14:27:42.346673  216259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-805757
	I1014 14:27:42.370133  216259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/old-k8s-version-805757/id_rsa Username:docker}
	I1014 14:27:42.475527  216259 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 14:27:42.480227  216259 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 14:27:42.480264  216259 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1014 14:27:42.480276  216259 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1014 14:27:42.480283  216259 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1014 14:27:42.480293  216259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-2229/.minikube/addons for local assets ...
	I1014 14:27:42.480363  216259 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-2229/.minikube/files for local assets ...
	I1014 14:27:42.480453  216259 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-2229/.minikube/files/etc/ssl/certs/75422.pem -> 75422.pem in /etc/ssl/certs
	I1014 14:27:42.480564  216259 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 14:27:42.491328  216259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/files/etc/ssl/certs/75422.pem --> /etc/ssl/certs/75422.pem (1708 bytes)
	I1014 14:27:42.520607  216259 start.go:296] duration metric: took 174.142205ms for postStartSetup
	I1014 14:27:42.520767  216259 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 14:27:42.520852  216259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-805757
	I1014 14:27:42.540966  216259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/old-k8s-version-805757/id_rsa Username:docker}
	I1014 14:27:42.634626  216259 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 14:27:42.639288  216259 fix.go:56] duration metric: took 5.773471505s for fixHost
	I1014 14:27:42.639314  216259 start.go:83] releasing machines lock for "old-k8s-version-805757", held for 5.773520285s
	I1014 14:27:42.639384  216259 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-805757
	I1014 14:27:42.656999  216259 ssh_runner.go:195] Run: cat /version.json
	I1014 14:27:42.657102  216259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-805757
	I1014 14:27:42.657260  216259 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 14:27:42.657326  216259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-805757
	I1014 14:27:42.682707  216259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/old-k8s-version-805757/id_rsa Username:docker}
	I1014 14:27:42.691919  216259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/old-k8s-version-805757/id_rsa Username:docker}
	I1014 14:27:42.929938  216259 ssh_runner.go:195] Run: systemctl --version
	I1014 14:27:42.935090  216259 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 14:27:42.939788  216259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1014 14:27:42.958788  216259 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1014 14:27:42.958916  216259 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 14:27:42.969289  216259 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
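Note: the find/sed pass above only injects a "name": "loopback" field and pins cniVersion to 1.0.0 in any loopback CNI config it finds. A minimal sketch of a file in the shape that patch produces (the 200-loopback.conf path and exact JSON are assumptions for illustration, not taken from this run):

    # hypothetical: write a loopback config matching the patched form
    cat <<'EOF' | sudo tee /etc/cni/net.d/200-loopback.conf
    {
        "cniVersion": "1.0.0",
        "name": "loopback",
        "type": "loopback"
    }
    EOF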
	I1014 14:27:42.969366  216259 start.go:495] detecting cgroup driver to use...
	I1014 14:27:42.969414  216259 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 14:27:42.969493  216259 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 14:27:42.985631  216259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 14:27:43.000483  216259 docker.go:217] disabling cri-docker service (if available) ...
	I1014 14:27:43.000617  216259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 14:27:43.016463  216259 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 14:27:43.030097  216259 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 14:27:43.140481  216259 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 14:27:43.262691  216259 docker.go:233] disabling docker service ...
	I1014 14:27:43.262756  216259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 14:27:43.278508  216259 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 14:27:43.290151  216259 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 14:27:43.411274  216259 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 14:27:43.508335  216259 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 14:27:43.522485  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 14:27:43.540245  216259 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I1014 14:27:43.550661  216259 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 14:27:43.561004  216259 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 14:27:43.561102  216259 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 14:27:43.571338  216259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 14:27:43.581967  216259 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 14:27:43.592442  216259 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 14:27:43.602744  216259 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 14:27:43.612320  216259 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 14:27:43.622639  216259 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 14:27:43.632922  216259 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 14:27:43.641920  216259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:27:43.747256  216259 ssh_runner.go:195] Run: sudo systemctl restart containerd
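Note: the sed edits above pin the pause image to registry.k8s.io/pause:3.2, force SystemdCgroup = false (matching the cgroupfs driver detected earlier), switch runc to the v2 runtime shim, and point the CNI conf_dir at /etc/cni/net.d before containerd is restarted. A hedged spot-check of the result (grep patterns assumed, not from this run):

    grep -nE 'SystemdCgroup|sandbox_image|conf_dir|runc\.v2' /etc/containerd/config.toml
    # roughly expected:
    #   SystemdCgroup = false
    #   sandbox_image = "registry.k8s.io/pause:3.2"
    #   conf_dir = "/etc/cni/net.d"
    #   runtime_type = "io.containerd.runc.v2"
    sudo systemctl is-active containerd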
	I1014 14:27:44.034974  216259 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1014 14:27:44.035087  216259 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1014 14:27:44.040000  216259 start.go:563] Will wait 60s for crictl version
	I1014 14:27:44.040130  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:27:44.045306  216259 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 14:27:44.129065  216259 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1014 14:27:44.129174  216259 ssh_runner.go:195] Run: containerd --version
	I1014 14:27:44.160838  216259 ssh_runner.go:195] Run: containerd --version
	I1014 14:27:44.190651  216259 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I1014 14:27:44.192302  216259 cli_runner.go:164] Run: docker network inspect old-k8s-version-805757 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 14:27:44.214729  216259 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1014 14:27:44.218624  216259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
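Note: the grep/cp pipeline above rewrites /etc/hosts in one pass, dropping any stale host.minikube.internal line and appending the gateway IP; the same pattern is reused further down for control-plane.minikube.internal. A sketch of verifying the result (command assumed, not part of this run):

    grep 'host.minikube.internal' /etc/hosts
    # expected (IP from the log line above): 192.168.85.1	host.minikube.internal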
	I1014 14:27:44.239942  216259 kubeadm.go:883] updating cluster {Name:old-k8s-version-805757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-805757 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 14:27:44.240076  216259 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1014 14:27:44.240138  216259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 14:27:44.324313  216259 containerd.go:627] all images are preloaded for containerd runtime.
	I1014 14:27:44.324334  216259 containerd.go:534] Images already preloaded, skipping extraction
	I1014 14:27:44.324394  216259 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 14:27:44.380431  216259 containerd.go:627] all images are preloaded for containerd runtime.
	I1014 14:27:44.380457  216259 cache_images.go:84] Images are preloaded, skipping loading
	I1014 14:27:44.380465  216259 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.20.0 containerd true true} ...
	I1014 14:27:44.380574  216259 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-805757 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-805757 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 14:27:44.380648  216259 ssh_runner.go:195] Run: sudo crictl info
	I1014 14:27:44.433530  216259 cni.go:84] Creating CNI manager for ""
	I1014 14:27:44.433558  216259 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1014 14:27:44.433570  216259 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 14:27:44.433590  216259 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-805757 NodeName:old-k8s-version-805757 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1014 14:27:44.433717  216259 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-805757"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 14:27:44.433789  216259 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1014 14:27:44.443907  216259 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 14:27:44.443996  216259 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 14:27:44.453466  216259 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I1014 14:27:44.473348  216259 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 14:27:44.492947  216259 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I1014 14:27:44.513071  216259 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1014 14:27:44.516739  216259 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 14:27:44.528168  216259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:27:44.638275  216259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 14:27:44.654918  216259 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757 for IP: 192.168.85.2
	I1014 14:27:44.654942  216259 certs.go:194] generating shared ca certs ...
	I1014 14:27:44.654986  216259 certs.go:226] acquiring lock for ca certs: {Name:mk2a77364a9bb2b8250d1aa5761db5ebc543c9b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:27:44.655174  216259 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-2229/.minikube/ca.key
	I1014 14:27:44.655254  216259 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-2229/.minikube/proxy-client-ca.key
	I1014 14:27:44.655268  216259 certs.go:256] generating profile certs ...
	I1014 14:27:44.655370  216259 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/client.key
	I1014 14:27:44.655460  216259 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/apiserver.key.f1bfd56b
	I1014 14:27:44.655540  216259 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/proxy-client.key
	I1014 14:27:44.655677  216259 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/7542.pem (1338 bytes)
	W1014 14:27:44.655729  216259 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-2229/.minikube/certs/7542_empty.pem, impossibly tiny 0 bytes
	I1014 14:27:44.655746  216259 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 14:27:44.655772  216259 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca.pem (1082 bytes)
	I1014 14:27:44.655831  216259 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/cert.pem (1123 bytes)
	I1014 14:27:44.655862  216259 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/key.pem (1679 bytes)
	I1014 14:27:44.655928  216259 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2229/.minikube/files/etc/ssl/certs/75422.pem (1708 bytes)
	I1014 14:27:44.656570  216259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 14:27:44.683448  216259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 14:27:44.736274  216259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 14:27:44.791461  216259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 14:27:44.872068  216259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1014 14:27:44.903913  216259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 14:27:44.930282  216259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 14:27:44.956737  216259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 14:27:44.982730  216259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 14:27:45.034351  216259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/certs/7542.pem --> /usr/share/ca-certificates/7542.pem (1338 bytes)
	I1014 14:27:45.064737  216259 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/files/etc/ssl/certs/75422.pem --> /usr/share/ca-certificates/75422.pem (1708 bytes)
	I1014 14:27:45.096878  216259 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 14:27:45.120189  216259 ssh_runner.go:195] Run: openssl version
	I1014 14:27:45.130139  216259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75422.pem && ln -fs /usr/share/ca-certificates/75422.pem /etc/ssl/certs/75422.pem"
	I1014 14:27:45.145400  216259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75422.pem
	I1014 14:27:45.150838  216259 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:48 /usr/share/ca-certificates/75422.pem
	I1014 14:27:45.150960  216259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75422.pem
	I1014 14:27:45.160970  216259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75422.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 14:27:45.173773  216259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 14:27:45.187093  216259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:27:45.192036  216259 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:27:45.192142  216259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:27:45.200799  216259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 14:27:45.213260  216259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7542.pem && ln -fs /usr/share/ca-certificates/7542.pem /etc/ssl/certs/7542.pem"
	I1014 14:27:45.227460  216259 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7542.pem
	I1014 14:27:45.232668  216259 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:48 /usr/share/ca-certificates/7542.pem
	I1014 14:27:45.232784  216259 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7542.pem
	I1014 14:27:45.242036  216259 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7542.pem /etc/ssl/certs/51391683.0"
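Note: each ln -fs step above names its symlink after the certificate's subject hash plus a ".0" suffix, which is how OpenSSL locates CAs in /etc/ssl/certs. Reproducing one of them by hand (commands assumed, not part of this run):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"                      # b5213941 in this run
    ls -l "/etc/ssl/certs/$h.0"    # should point back at minikubeCA.pem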
	I1014 14:27:45.254218  216259 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 14:27:45.259514  216259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 14:27:45.268815  216259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 14:27:45.278668  216259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 14:27:45.287981  216259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 14:27:45.297250  216259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 14:27:45.306137  216259 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
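Note: each "-checkend 86400" call above exits 0 only if the certificate is still valid 24 hours from now, so a non-zero status is what would trigger regeneration. Illustrative usage (command assumed, not part of this run):

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h (or already expired)"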
	I1014 14:27:45.316696  216259 kubeadm.go:392] StartCluster: {Name:old-k8s-version-805757 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-805757 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/hom
e/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:27:45.316846  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1014 14:27:45.316974  216259 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 14:27:45.379823  216259 cri.go:89] found id: "a85040ad3d5d4ba4994b0450b39b274c9b3bd1d4a896f3aa97a7b18f001ae847"
	I1014 14:27:45.379883  216259 cri.go:89] found id: "7d4c84315f92a180b18b91af4a17a095d3b3f788c1c3310a8ad6e6a6174b60ed"
	I1014 14:27:45.379891  216259 cri.go:89] found id: "aabf9f2cec5a9a09399eb6272607dd7d37b3bef142cd296f21435dd0a98c849c"
	I1014 14:27:45.379897  216259 cri.go:89] found id: "2ad5db99d8a9f7057e1fd778bf0ca4e5c68f3f9822b3a1bb41bc768d030608e2"
	I1014 14:27:45.379977  216259 cri.go:89] found id: "b68f537421d89ef1144a9d150d1fb404e0cf70cf140e25288a5584f08e599341"
	I1014 14:27:45.380001  216259 cri.go:89] found id: "c197ec87040349dc58c31fbfb7bfc3a706e94a38a10ec1ef8a5c3e56acc68de7"
	I1014 14:27:45.380020  216259 cri.go:89] found id: "a6686cf4c0bd51dce22fafc941c8449ae0ef2121b23d5e7627deed041cf5110a"
	I1014 14:27:45.380059  216259 cri.go:89] found id: "c0e7973b2a3d77c06ba4b0697deba8dc30ded1bfe798e7724746ed6da918124a"
	I1014 14:27:45.380074  216259 cri.go:89] found id: ""
	I1014 14:27:45.380219  216259 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1014 14:27:45.398017  216259 cri.go:116] JSON = null
	W1014 14:27:45.398113  216259 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
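Note: the warning above comes from comparing two views of the same containers: crictl (via the CRI) reports 8 kube-system containers, while runc's k8s.io root reports none paused, so the unpause step is skipped. The two listings can be reproduced directly (flags taken from the log lines above; running them by hand is an illustrative sketch):

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | wc -l
    sudo runc --root /run/containerd/runc/k8s.io list -f json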
	I1014 14:27:45.398229  216259 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 14:27:45.413907  216259 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 14:27:45.413975  216259 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 14:27:45.414054  216259 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 14:27:45.429195  216259 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 14:27:45.429687  216259 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-805757" does not appear in /home/jenkins/minikube-integration/19790-2229/kubeconfig
	I1014 14:27:45.429838  216259 kubeconfig.go:62] /home/jenkins/minikube-integration/19790-2229/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-805757" cluster setting kubeconfig missing "old-k8s-version-805757" context setting]
	I1014 14:27:45.430347  216259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2229/kubeconfig: {Name:mk7703bee112acb0d700fbfe8aa7245ea0dd07d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:27:45.432532  216259 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 14:27:45.447397  216259 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
	I1014 14:27:45.447433  216259 kubeadm.go:597] duration metric: took 33.437033ms to restartPrimaryControlPlane
	I1014 14:27:45.447443  216259 kubeadm.go:394] duration metric: took 130.760185ms to StartCluster
	I1014 14:27:45.447460  216259 settings.go:142] acquiring lock: {Name:mk7dda8238a0606dcfbe3db5d257a14d7d308979 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:27:45.447522  216259 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-2229/kubeconfig
	I1014 14:27:45.448270  216259 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2229/kubeconfig: {Name:mk7703bee112acb0d700fbfe8aa7245ea0dd07d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
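Note: the repair above re-adds the old-k8s-version-805757 cluster and context to the shared kubeconfig. A sketch of confirming the entry afterwards (commands assumed, not part of this run):

    kubectl config get-contexts --kubeconfig /home/jenkins/minikube-integration/19790-2229/kubeconfig
    kubectl config view --kubeconfig /home/jenkins/minikube-integration/19790-2229/kubeconfig \
      -o jsonpath='{.clusters[?(@.name=="old-k8s-version-805757")].cluster.server}'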
	I1014 14:27:45.448707  216259 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1014 14:27:45.449489  216259 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 14:27:45.449679  216259 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-805757"
	I1014 14:27:45.450035  216259 config.go:182] Loaded profile config "old-k8s-version-805757": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I1014 14:27:45.450067  216259 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-805757"
	W1014 14:27:45.450097  216259 addons.go:243] addon storage-provisioner should already be in state true
	I1014 14:27:45.450121  216259 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-805757"
	I1014 14:27:45.450129  216259 host.go:66] Checking if "old-k8s-version-805757" exists ...
	I1014 14:27:45.450133  216259 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-805757"
	I1014 14:27:45.450417  216259 cli_runner.go:164] Run: docker container inspect old-k8s-version-805757 --format={{.State.Status}}
	I1014 14:27:45.450591  216259 cli_runner.go:164] Run: docker container inspect old-k8s-version-805757 --format={{.State.Status}}
	I1014 14:27:45.450869  216259 addons.go:69] Setting dashboard=true in profile "old-k8s-version-805757"
	I1014 14:27:45.450895  216259 addons.go:234] Setting addon dashboard=true in "old-k8s-version-805757"
	W1014 14:27:45.450903  216259 addons.go:243] addon dashboard should already be in state true
	I1014 14:27:45.450934  216259 host.go:66] Checking if "old-k8s-version-805757" exists ...
	I1014 14:27:45.451413  216259 cli_runner.go:164] Run: docker container inspect old-k8s-version-805757 --format={{.State.Status}}
	I1014 14:27:45.456687  216259 out.go:177] * Verifying Kubernetes components...
	I1014 14:27:45.456894  216259 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-805757"
	I1014 14:27:45.456914  216259 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-805757"
	W1014 14:27:45.456921  216259 addons.go:243] addon metrics-server should already be in state true
	I1014 14:27:45.456954  216259 host.go:66] Checking if "old-k8s-version-805757" exists ...
	I1014 14:27:45.457535  216259 cli_runner.go:164] Run: docker container inspect old-k8s-version-805757 --format={{.State.Status}}
	I1014 14:27:45.459817  216259 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:27:45.517022  216259 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 14:27:45.518574  216259 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 14:27:45.518597  216259 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 14:27:45.518671  216259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-805757
	I1014 14:27:45.541890  216259 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-805757"
	W1014 14:27:45.541912  216259 addons.go:243] addon default-storageclass should already be in state true
	I1014 14:27:45.541937  216259 host.go:66] Checking if "old-k8s-version-805757" exists ...
	I1014 14:27:45.542383  216259 cli_runner.go:164] Run: docker container inspect old-k8s-version-805757 --format={{.State.Status}}
	I1014 14:27:45.544695  216259 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 14:27:45.544754  216259 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1014 14:27:45.546686  216259 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1014 14:27:45.546860  216259 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 14:27:45.546869  216259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 14:27:45.546931  216259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-805757
	I1014 14:27:45.548434  216259 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1014 14:27:45.548459  216259 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1014 14:27:45.548515  216259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-805757
	I1014 14:27:45.596272  216259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/old-k8s-version-805757/id_rsa Username:docker}
	I1014 14:27:45.621312  216259 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 14:27:45.621337  216259 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 14:27:45.621398  216259 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-805757
	I1014 14:27:45.622915  216259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/old-k8s-version-805757/id_rsa Username:docker}
	I1014 14:27:45.639485  216259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/old-k8s-version-805757/id_rsa Username:docker}
	I1014 14:27:45.655606  216259 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/old-k8s-version-805757/id_rsa Username:docker}
	I1014 14:27:45.698761  216259 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 14:27:45.723995  216259 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-805757" to be "Ready" ...
	I1014 14:27:45.749592  216259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 14:27:45.749653  216259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 14:27:45.779098  216259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 14:27:45.779162  216259 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 14:27:45.805111  216259 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 14:27:45.805174  216259 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1014 14:27:45.828676  216259 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1014 14:27:45.828739  216259 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1014 14:27:45.852879  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 14:27:45.879650  216259 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1014 14:27:45.879801  216259 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1014 14:27:45.885619  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 14:27:45.910975  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 14:27:46.001239  216259 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1014 14:27:46.001329  216259 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1014 14:27:46.121750  216259 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1014 14:27:46.121830  216259 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1014 14:27:46.134796  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:46.134882  216259 retry.go:31] will retry after 364.307681ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:46.212260  216259 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1014 14:27:46.212333  216259 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1014 14:27:46.252944  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:46.253038  216259 retry.go:31] will retry after 149.01469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1014 14:27:46.278725  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:46.278796  216259 retry.go:31] will retry after 230.843375ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:46.282287  216259 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1014 14:27:46.282348  216259 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1014 14:27:46.301466  216259 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1014 14:27:46.301539  216259 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1014 14:27:46.320432  216259 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1014 14:27:46.320507  216259 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1014 14:27:46.340089  216259 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1014 14:27:46.340161  216259 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1014 14:27:46.360107  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1014 14:27:46.403195  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 14:27:46.474344  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:46.474431  216259 retry.go:31] will retry after 329.870898ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:46.499671  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 14:27:46.510005  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 14:27:46.601387  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:46.601462  216259 retry.go:31] will retry after 370.398285ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1014 14:27:46.680974  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:46.681092  216259 retry.go:31] will retry after 287.952857ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1014 14:27:46.681613  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:46.681640  216259 retry.go:31] will retry after 305.401897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:46.804689  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1014 14:27:46.874772  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:46.874805  216259 retry.go:31] will retry after 426.961801ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:46.970152  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 14:27:46.972367  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 14:27:46.987779  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 14:27:47.186157  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:47.186206  216259 retry.go:31] will retry after 337.588965ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1014 14:27:47.248893  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:47.248933  216259 retry.go:31] will retry after 648.685732ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1014 14:27:47.267643  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:47.267678  216259 retry.go:31] will retry after 361.31286ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:47.302936  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1014 14:27:47.447308  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:47.447344  216259 retry.go:31] will retry after 425.929544ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:47.524637  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 14:27:47.630074  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 14:27:47.672149  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:47.672197  216259 retry.go:31] will retry after 1.231428878s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:47.724606  216259 node_ready.go:53] error getting node "old-k8s-version-805757": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-805757": dial tcp 192.168.85.2:8443: connect: connection refused
	W1014 14:27:47.802454  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:47.802491  216259 retry.go:31] will retry after 455.66598ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:47.873771  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1014 14:27:47.898197  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 14:27:48.072124  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:48.072168  216259 retry.go:31] will retry after 1.1856912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1014 14:27:48.128547  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:48.128617  216259 retry.go:31] will retry after 946.14367ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:48.258352  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 14:27:48.351745  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:48.351788  216259 retry.go:31] will retry after 1.307119912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:48.903847  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1014 14:27:49.026682  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:49.026719  216259 retry.go:31] will retry after 1.33977639s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:49.075818  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1014 14:27:49.212770  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:49.212810  216259 retry.go:31] will retry after 1.039114528s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:49.258574  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1014 14:27:49.402272  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:49.402307  216259 retry.go:31] will retry after 843.149693ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:49.659678  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 14:27:49.725409  216259 node_ready.go:53] error getting node "old-k8s-version-805757": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-805757": dial tcp 192.168.85.2:8443: connect: connection refused
	W1014 14:27:49.774173  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:49.774217  216259 retry.go:31] will retry after 2.651550977s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:50.246523  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1014 14:27:50.252799  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 14:27:50.367307  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1014 14:27:50.442830  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:50.442886  216259 retry.go:31] will retry after 2.288534132s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1014 14:27:50.460973  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:50.461020  216259 retry.go:31] will retry after 2.240953048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1014 14:27:50.560821  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:50.560857  216259 retry.go:31] will retry after 2.519470186s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:52.224605  216259 node_ready.go:53] error getting node "old-k8s-version-805757": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-805757": dial tcp 192.168.85.2:8443: connect: connection refused
	I1014 14:27:52.426802  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1014 14:27:52.535150  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:52.535236  216259 retry.go:31] will retry after 4.224456955s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:52.702490  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 14:27:52.731626  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1014 14:27:52.792098  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:52.792131  216259 retry.go:31] will retry after 3.732902457s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W1014 14:27:52.824174  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:52.824208  216259 retry.go:31] will retry after 3.042124506s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:53.080538  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W1014 14:27:53.220260  216259 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:53.220299  216259 retry.go:31] will retry after 3.4817905s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1014 14:27:54.224969  216259 node_ready.go:53] error getting node "old-k8s-version-805757": Get "https://192.168.85.2:8443/api/v1/nodes/old-k8s-version-805757": dial tcp 192.168.85.2:8443: connect: connection refused
	I1014 14:27:55.866688  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1014 14:27:56.525346  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 14:27:56.703132  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 14:27:56.760414  216259 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 14:28:04.571692  216259 node_ready.go:49] node "old-k8s-version-805757" has status "Ready":"True"
	I1014 14:28:04.571723  216259 node_ready.go:38] duration metric: took 18.847699191s for node "old-k8s-version-805757" to be "Ready" ...
	I1014 14:28:04.571736  216259 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 14:28:04.878126  216259 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-x5x6d" in "kube-system" namespace to be "Ready" ...
	I1014 14:28:05.131328  216259 pod_ready.go:93] pod "coredns-74ff55c5b-x5x6d" in "kube-system" namespace has status "Ready":"True"
	I1014 14:28:05.131358  216259 pod_ready.go:82] duration metric: took 253.187774ms for pod "coredns-74ff55c5b-x5x6d" in "kube-system" namespace to be "Ready" ...
	I1014 14:28:05.131381  216259 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-805757" in "kube-system" namespace to be "Ready" ...
	I1014 14:28:05.272784  216259 pod_ready.go:93] pod "etcd-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"True"
	I1014 14:28:05.272824  216259 pod_ready.go:82] duration metric: took 141.421476ms for pod "etcd-old-k8s-version-805757" in "kube-system" namespace to be "Ready" ...
	I1014 14:28:05.272841  216259 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-805757" in "kube-system" namespace to be "Ready" ...
	I1014 14:28:05.355399  216259 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"True"
	I1014 14:28:05.355434  216259 pod_ready.go:82] duration metric: took 82.583371ms for pod "kube-apiserver-old-k8s-version-805757" in "kube-system" namespace to be "Ready" ...
	I1014 14:28:05.355448  216259 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace to be "Ready" ...
	I1014 14:28:07.037548  216259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (11.170807008s)
	I1014 14:28:07.037751  216259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.512372261s)
	I1014 14:28:07.037849  216259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.334688662s)
	I1014 14:28:07.037865  216259 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-805757"
	I1014 14:28:07.037899  216259 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (10.277447717s)
	I1014 14:28:07.039582  216259 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-805757 addons enable metrics-server
	
	I1014 14:28:07.045310  216259 out.go:177] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
	I1014 14:28:07.047158  216259 addons.go:510] duration metric: took 21.597717708s for enable addons: enabled=[storage-provisioner metrics-server dashboard default-storageclass]
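Every "apply failed, will retry" block above is the same situation: the addon manifests are already copied onto the node, but the freshly restarted apiserver behind localhost:8443 is still refusing connections, so each kubectl apply is re-run with a growing, lightly jittered delay until the control plane answers (all four batches finally complete at 14:28:07, which is why the "Completed" durations are 10-11s). The following is only a minimal sketch of that retry pattern, with a hypothetical applyWithRetry helper, not minikube's actual retry code:

	// applyretry.go: minimal sketch (not minikube's retry implementation) of
	// re-running a kubectl apply with growing, jittered delays while the
	// apiserver is still coming back up.
	package main

	import (
		"fmt"
		"math/rand"
		"os"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply --force -f manifest` until it
	// succeeds or the attempt budget runs out; the sleep roughly doubles each
	// round with a little jitter, mirroring the "will retry after ..." delays
	// in the log above.
	func applyWithRetry(kubectl, kubeconfig, manifest string, attempts int) error {
		delay := 300 * time.Millisecond
		var lastErr error
		for i := 0; i < attempts; i++ {
			cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
			cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
			out, err := cmd.CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply %s: %w\n%s", manifest, err, out)
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
			delay *= 2
		}
		return lastErr
	}

	func main() {
		err := applyWithRetry(
			"/var/lib/minikube/binaries/v1.20.0/kubectl",
			"/var/lib/minikube/kubeconfig",
			"/etc/kubernetes/addons/storage-provisioner.yaml",
			8,
		)
		fmt.Println("final result:", err)
	}
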
	I1014 14:28:07.362898  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:09.861351  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:12.361673  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:14.862617  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:17.362406  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:19.861915  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:21.863694  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:23.863876  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:26.364369  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:28.867852  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:30.897193  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:33.363340  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:35.861756  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:37.884945  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:40.363153  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:42.861707  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:44.861763  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:46.862318  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:49.362381  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:51.863570  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:54.361591  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:56.362014  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:58.389723  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:00.862532  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:03.362440  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:05.362777  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:07.364010  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:09.368100  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:11.862636  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:13.864720  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:16.362492  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:18.862570  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:21.363558  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:23.862380  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:26.362144  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:28.862409  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:31.362871  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:32.362216  216259 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"True"
	I1014 14:29:32.362286  216259 pod_ready.go:82] duration metric: took 1m27.00682947s for pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:32.362302  216259 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nj7wx" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:32.367368  216259 pod_ready.go:93] pod "kube-proxy-nj7wx" in "kube-system" namespace has status "Ready":"True"
	I1014 14:29:32.367392  216259 pod_ready.go:82] duration metric: took 5.08207ms for pod "kube-proxy-nj7wx" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:32.367405  216259 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-805757" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:32.372466  216259 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"True"
	I1014 14:29:32.372491  216259 pod_ready.go:82] duration metric: took 5.077926ms for pod "kube-scheduler-old-k8s-version-805757" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:32.372504  216259 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:34.379365  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:36.879386  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:38.884745  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:41.378829  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:43.379493  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:45.419448  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:47.879361  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:49.879585  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:52.378918  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:54.379170  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:56.879120  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:58.879718  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:00.881010  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:03.379169  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:05.380331  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:07.879010  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:09.879270  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:11.885034  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:14.379451  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:16.879390  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:19.379693  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:21.382557  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:23.878999  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:26.379088  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:28.878749  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:31.378849  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:33.378999  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:35.379374  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:37.379992  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:39.879036  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:41.879092  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:44.378523  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:46.379123  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:48.879247  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:50.880513  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:53.378959  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:55.379151  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:57.878997  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:59.879145  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:01.882814  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:04.379368  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:06.386347  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:08.878519  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:10.878795  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:12.879028  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:14.879477  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:17.378629  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:19.379260  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:21.879187  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:23.886598  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:26.379060  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:28.879813  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:30.890781  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:33.380293  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:35.880961  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:38.379273  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:40.379418  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:42.379580  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:44.879477  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:47.378646  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:49.379027  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:51.879180  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:53.879232  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:56.379016  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:58.379324  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:00.421269  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:02.879330  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:05.379215  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:07.878919  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:09.879783  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:11.887894  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:14.379482  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:16.879007  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:18.879819  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:21.378803  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:23.379684  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:25.878727  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:27.888043  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:30.379550  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:32.879063  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:35.378983  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:37.879331  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:40.378975  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:42.379655  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:44.879380  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:46.881767  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:49.379058  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:51.879190  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:54.378656  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:56.879139  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:58.879348  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:01.378669  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:03.379085  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:05.879350  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:07.879403  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:09.879447  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:12.379073  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:14.381700  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:16.879392  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:18.880717  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:21.380013  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:23.879438  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:26.378684  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:28.378972  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:30.383427  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:32.379471  216259 pod_ready.go:82] duration metric: took 4m0.006953195s for pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace to be "Ready" ...
	E1014 14:33:32.379497  216259 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1014 14:33:32.379508  216259 pod_ready.go:39] duration metric: took 5m27.807760732s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
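The wait loop above is minikube's pod_ready helper polling the metrics-server pod's Ready condition roughly every two seconds until its per-pod budget (4m here, inside the overall 6m extra-wait window) runs out. As a rough illustration only, a readiness poll of this shape can be written with client-go as below; the package name, helper name, and fixed 2-second cadence are assumptions for the sketch, not minikube's actual pod_ready implementation.

```go
// Package podwait sketches a Ready-condition poll like the one logged above.
// Hypothetical helper, not minikube's pod_ready code.
package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady re-reads the pod every two seconds and returns nil once its
// Ready condition is True, or an error when the timeout elapses.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready:"True"
				}
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pod %s/%s not Ready within %s: context deadline exceeded", ns, name, timeout)
		}
		time.Sleep(2 * time.Second) // roughly the cadence visible in the log above
	}
}
```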
	I1014 14:33:32.379522  216259 api_server.go:52] waiting for apiserver process to appear ...
	I1014 14:33:32.379549  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1014 14:33:32.379612  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 14:33:32.420858  216259 cri.go:89] found id: "251d9455c11c639858e466963ec56324aeb83e9fc382fbe263872a371f538c75"
	I1014 14:33:32.420881  216259 cri.go:89] found id: "a6686cf4c0bd51dce22fafc941c8449ae0ef2121b23d5e7627deed041cf5110a"
	I1014 14:33:32.420887  216259 cri.go:89] found id: ""
	I1014 14:33:32.420895  216259 logs.go:282] 2 containers: [251d9455c11c639858e466963ec56324aeb83e9fc382fbe263872a371f538c75 a6686cf4c0bd51dce22fafc941c8449ae0ef2121b23d5e7627deed041cf5110a]
	I1014 14:33:32.420953  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.424649  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.428392  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1014 14:33:32.428468  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 14:33:32.480876  216259 cri.go:89] found id: "79cda810f8eedaae70b852ca2bbe9f942797b30c95e1a21f945f4c58a2a82667"
	I1014 14:33:32.480900  216259 cri.go:89] found id: "c197ec87040349dc58c31fbfb7bfc3a706e94a38a10ec1ef8a5c3e56acc68de7"
	I1014 14:33:32.480905  216259 cri.go:89] found id: ""
	I1014 14:33:32.480913  216259 logs.go:282] 2 containers: [79cda810f8eedaae70b852ca2bbe9f942797b30c95e1a21f945f4c58a2a82667 c197ec87040349dc58c31fbfb7bfc3a706e94a38a10ec1ef8a5c3e56acc68de7]
	I1014 14:33:32.480974  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.484645  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.488128  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1014 14:33:32.488199  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 14:33:32.543204  216259 cri.go:89] found id: "5f8a8a7df27832dc912d6c524e75b1312fa4e33d94b6dc73cc3554afc4f38900"
	I1014 14:33:32.543226  216259 cri.go:89] found id: "a85040ad3d5d4ba4994b0450b39b274c9b3bd1d4a896f3aa97a7b18f001ae847"
	I1014 14:33:32.543231  216259 cri.go:89] found id: ""
	I1014 14:33:32.543248  216259 logs.go:282] 2 containers: [5f8a8a7df27832dc912d6c524e75b1312fa4e33d94b6dc73cc3554afc4f38900 a85040ad3d5d4ba4994b0450b39b274c9b3bd1d4a896f3aa97a7b18f001ae847]
	I1014 14:33:32.543309  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.547797  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.552745  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1014 14:33:32.552819  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 14:33:32.593623  216259 cri.go:89] found id: "2ffd812c6a8f3a0c8955aa0bc1dcda72cd2b449d10f74400dd5f13a554aa6d85"
	I1014 14:33:32.593661  216259 cri.go:89] found id: "c0e7973b2a3d77c06ba4b0697deba8dc30ded1bfe798e7724746ed6da918124a"
	I1014 14:33:32.593666  216259 cri.go:89] found id: ""
	I1014 14:33:32.593673  216259 logs.go:282] 2 containers: [2ffd812c6a8f3a0c8955aa0bc1dcda72cd2b449d10f74400dd5f13a554aa6d85 c0e7973b2a3d77c06ba4b0697deba8dc30ded1bfe798e7724746ed6da918124a]
	I1014 14:33:32.593738  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.597620  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.601447  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1014 14:33:32.601514  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 14:33:32.638881  216259 cri.go:89] found id: "d02c985f81647cbe7e1a590dd7acca3343a87193b8a80f164dece0f9e2c5c560"
	I1014 14:33:32.638904  216259 cri.go:89] found id: "2ad5db99d8a9f7057e1fd778bf0ca4e5c68f3f9822b3a1bb41bc768d030608e2"
	I1014 14:33:32.638909  216259 cri.go:89] found id: ""
	I1014 14:33:32.638917  216259 logs.go:282] 2 containers: [d02c985f81647cbe7e1a590dd7acca3343a87193b8a80f164dece0f9e2c5c560 2ad5db99d8a9f7057e1fd778bf0ca4e5c68f3f9822b3a1bb41bc768d030608e2]
	I1014 14:33:32.638996  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.642428  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.645883  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 14:33:32.645957  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 14:33:32.686782  216259 cri.go:89] found id: "1d3792d83fc3d30024de05f8b74700440d6d0332131afc49b3bb30a2654a5ff0"
	I1014 14:33:32.686807  216259 cri.go:89] found id: "b68f537421d89ef1144a9d150d1fb404e0cf70cf140e25288a5584f08e599341"
	I1014 14:33:32.686812  216259 cri.go:89] found id: ""
	I1014 14:33:32.686819  216259 logs.go:282] 2 containers: [1d3792d83fc3d30024de05f8b74700440d6d0332131afc49b3bb30a2654a5ff0 b68f537421d89ef1144a9d150d1fb404e0cf70cf140e25288a5584f08e599341]
	I1014 14:33:32.686878  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.690508  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.693860  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1014 14:33:32.693956  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 14:33:32.730043  216259 cri.go:89] found id: "1eb5f19222f629d40890237b452431b7fec59dca210f80c94703b2a47e4b1a5f"
	I1014 14:33:32.730066  216259 cri.go:89] found id: "7d4c84315f92a180b18b91af4a17a095d3b3f788c1c3310a8ad6e6a6174b60ed"
	I1014 14:33:32.730072  216259 cri.go:89] found id: ""
	I1014 14:33:32.730112  216259 logs.go:282] 2 containers: [1eb5f19222f629d40890237b452431b7fec59dca210f80c94703b2a47e4b1a5f 7d4c84315f92a180b18b91af4a17a095d3b3f788c1c3310a8ad6e6a6174b60ed]
	I1014 14:33:32.730184  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.733712  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.737118  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 14:33:32.737184  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 14:33:32.792728  216259 cri.go:89] found id: "d77ae50b9c30cf70a9c9234c8e136c5bbfbc2df275ea64fc301df3a39321a592"
	I1014 14:33:32.792805  216259 cri.go:89] found id: ""
	I1014 14:33:32.792828  216259 logs.go:282] 1 containers: [d77ae50b9c30cf70a9c9234c8e136c5bbfbc2df275ea64fc301df3a39321a592]
	I1014 14:33:32.792920  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.798246  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1014 14:33:32.798401  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 14:33:32.841099  216259 cri.go:89] found id: "9fd76e6e07f511d2eda883f06e9d54e137a9eee4cae7837d973fd4688fa6ef98"
	I1014 14:33:32.841124  216259 cri.go:89] found id: "72aa351ee44e0112bfe7f6c5290b71f617c5f61c9d471cb16a86fa4783a604cc"
	I1014 14:33:32.841129  216259 cri.go:89] found id: ""
	I1014 14:33:32.841137  216259 logs.go:282] 2 containers: [9fd76e6e07f511d2eda883f06e9d54e137a9eee4cae7837d973fd4688fa6ef98 72aa351ee44e0112bfe7f6c5290b71f617c5f61c9d471cb16a86fa4783a604cc]
	I1014 14:33:32.841217  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.844864  216259 ssh_runner.go:195] Run: which crictl
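Each "listing CRI containers" step above runs `sudo crictl ps -a --quiet --name=<component>` on the node and records the returned container IDs (typically two per component here: the current container plus the exited one from before the restart). A minimal local sketch of that enumeration step, shelling out to crictl directly instead of going through minikube's ssh_runner, might look like the following; the package and function names are hypothetical.

```go
// Package crilist sketches the container-ID discovery shown above:
// `crictl ps -a --quiet --name=<component>` prints one container ID per line,
// including exited containers. Hypothetical helper, not minikube's cri package.
package crilist

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns all container IDs (running or exited) whose name
// matches the given component, e.g. "kube-apiserver" or "etcd".
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps --name=%s: %w", component, err)
	}
	// One ID per line; strings.Fields also drops the trailing empty entry
	// that the log above records as "".
	return strings.Fields(string(out)), nil
}
```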
	I1014 14:33:32.848782  216259 logs.go:123] Gathering logs for containerd ...
	I1014 14:33:32.848824  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1014 14:33:32.909970  216259 logs.go:123] Gathering logs for describe nodes ...
	I1014 14:33:32.910006  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 14:33:33.063452  216259 logs.go:123] Gathering logs for kube-scheduler [2ffd812c6a8f3a0c8955aa0bc1dcda72cd2b449d10f74400dd5f13a554aa6d85] ...
	I1014 14:33:33.063486  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ffd812c6a8f3a0c8955aa0bc1dcda72cd2b449d10f74400dd5f13a554aa6d85"
	I1014 14:33:33.106049  216259 logs.go:123] Gathering logs for kube-proxy [d02c985f81647cbe7e1a590dd7acca3343a87193b8a80f164dece0f9e2c5c560] ...
	I1014 14:33:33.106082  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d02c985f81647cbe7e1a590dd7acca3343a87193b8a80f164dece0f9e2c5c560"
	I1014 14:33:33.144927  216259 logs.go:123] Gathering logs for kube-proxy [2ad5db99d8a9f7057e1fd778bf0ca4e5c68f3f9822b3a1bb41bc768d030608e2] ...
	I1014 14:33:33.145001  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad5db99d8a9f7057e1fd778bf0ca4e5c68f3f9822b3a1bb41bc768d030608e2"
	I1014 14:33:33.186145  216259 logs.go:123] Gathering logs for kube-controller-manager [1d3792d83fc3d30024de05f8b74700440d6d0332131afc49b3bb30a2654a5ff0] ...
	I1014 14:33:33.186172  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3792d83fc3d30024de05f8b74700440d6d0332131afc49b3bb30a2654a5ff0"
	I1014 14:33:33.247100  216259 logs.go:123] Gathering logs for kube-controller-manager [b68f537421d89ef1144a9d150d1fb404e0cf70cf140e25288a5584f08e599341] ...
	I1014 14:33:33.247133  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b68f537421d89ef1144a9d150d1fb404e0cf70cf140e25288a5584f08e599341"
	I1014 14:33:33.329663  216259 logs.go:123] Gathering logs for storage-provisioner [9fd76e6e07f511d2eda883f06e9d54e137a9eee4cae7837d973fd4688fa6ef98] ...
	I1014 14:33:33.329749  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9fd76e6e07f511d2eda883f06e9d54e137a9eee4cae7837d973fd4688fa6ef98"
	I1014 14:33:33.376420  216259 logs.go:123] Gathering logs for dmesg ...
	I1014 14:33:33.376514  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 14:33:33.394320  216259 logs.go:123] Gathering logs for etcd [79cda810f8eedaae70b852ca2bbe9f942797b30c95e1a21f945f4c58a2a82667] ...
	I1014 14:33:33.394350  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79cda810f8eedaae70b852ca2bbe9f942797b30c95e1a21f945f4c58a2a82667"
	I1014 14:33:33.436562  216259 logs.go:123] Gathering logs for etcd [c197ec87040349dc58c31fbfb7bfc3a706e94a38a10ec1ef8a5c3e56acc68de7] ...
	I1014 14:33:33.436589  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c197ec87040349dc58c31fbfb7bfc3a706e94a38a10ec1ef8a5c3e56acc68de7"
	I1014 14:33:33.499492  216259 logs.go:123] Gathering logs for coredns [5f8a8a7df27832dc912d6c524e75b1312fa4e33d94b6dc73cc3554afc4f38900] ...
	I1014 14:33:33.499526  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f8a8a7df27832dc912d6c524e75b1312fa4e33d94b6dc73cc3554afc4f38900"
	I1014 14:33:33.542632  216259 logs.go:123] Gathering logs for kube-scheduler [c0e7973b2a3d77c06ba4b0697deba8dc30ded1bfe798e7724746ed6da918124a] ...
	I1014 14:33:33.542660  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0e7973b2a3d77c06ba4b0697deba8dc30ded1bfe798e7724746ed6da918124a"
	I1014 14:33:33.586443  216259 logs.go:123] Gathering logs for kindnet [7d4c84315f92a180b18b91af4a17a095d3b3f788c1c3310a8ad6e6a6174b60ed] ...
	I1014 14:33:33.586475  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d4c84315f92a180b18b91af4a17a095d3b3f788c1c3310a8ad6e6a6174b60ed"
	I1014 14:33:33.631465  216259 logs.go:123] Gathering logs for container status ...
	I1014 14:33:33.631494  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 14:33:33.691723  216259 logs.go:123] Gathering logs for kube-apiserver [a6686cf4c0bd51dce22fafc941c8449ae0ef2121b23d5e7627deed041cf5110a] ...
	I1014 14:33:33.691754  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6686cf4c0bd51dce22fafc941c8449ae0ef2121b23d5e7627deed041cf5110a"
	I1014 14:33:33.767285  216259 logs.go:123] Gathering logs for coredns [a85040ad3d5d4ba4994b0450b39b274c9b3bd1d4a896f3aa97a7b18f001ae847] ...
	I1014 14:33:33.767317  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a85040ad3d5d4ba4994b0450b39b274c9b3bd1d4a896f3aa97a7b18f001ae847"
	I1014 14:33:33.807954  216259 logs.go:123] Gathering logs for kindnet [1eb5f19222f629d40890237b452431b7fec59dca210f80c94703b2a47e4b1a5f] ...
	I1014 14:33:33.807984  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eb5f19222f629d40890237b452431b7fec59dca210f80c94703b2a47e4b1a5f"
	I1014 14:33:33.859116  216259 logs.go:123] Gathering logs for kubelet ...
	I1014 14:33:33.859148  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1014 14:33:33.914903  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:06 old-k8s-version-805757 kubelet[663]: E1014 14:28:06.266848     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1014 14:33:33.915110  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:06 old-k8s-version-805757 kubelet[663]: E1014 14:28:06.684312     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.919359  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:26 old-k8s-version-805757 kubelet[663]: E1014 14:28:26.249917     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1014 14:33:33.919970  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:27 old-k8s-version-805757 kubelet[663]: E1014 14:28:27.815653     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.920299  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:28 old-k8s-version-805757 kubelet[663]: E1014 14:28:28.824018     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.920954  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:35 old-k8s-version-805757 kubelet[663]: E1014 14:28:35.818926     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.921450  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:37 old-k8s-version-805757 kubelet[663]: E1014 14:28:37.852417     663 pod_workers.go:191] Error syncing pod 98cb5c4c-6a3c-475a-bd4a-7fbf7a77d593 ("storage-provisioner_kube-system(98cb5c4c-6a3c-475a-bd4a-7fbf7a77d593)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(98cb5c4c-6a3c-475a-bd4a-7fbf7a77d593)"
	W1014 14:33:33.921637  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:41 old-k8s-version-805757 kubelet[663]: E1014 14:28:41.310054     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.922551  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:50 old-k8s-version-805757 kubelet[663]: E1014 14:28:50.896758     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.925140  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:53 old-k8s-version-805757 kubelet[663]: E1014 14:28:53.334247     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1014 14:33:33.925466  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:55 old-k8s-version-805757 kubelet[663]: E1014 14:28:55.818942     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.925650  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:06 old-k8s-version-805757 kubelet[663]: E1014 14:29:06.323071     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.925979  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:08 old-k8s-version-805757 kubelet[663]: E1014 14:29:08.309155     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.926162  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:18 old-k8s-version-805757 kubelet[663]: E1014 14:29:18.309720     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.926754  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:22 old-k8s-version-805757 kubelet[663]: E1014 14:29:22.029488     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.927087  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:25 old-k8s-version-805757 kubelet[663]: E1014 14:29:25.818872     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.927274  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:31 old-k8s-version-805757 kubelet[663]: E1014 14:29:31.309563     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.927604  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:37 old-k8s-version-805757 kubelet[663]: E1014 14:29:37.309788     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.930029  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:43 old-k8s-version-805757 kubelet[663]: E1014 14:29:43.326510     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1014 14:33:33.930355  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:52 old-k8s-version-805757 kubelet[663]: E1014 14:29:52.309613     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.930539  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:55 old-k8s-version-805757 kubelet[663]: E1014 14:29:55.314732     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.931123  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:05 old-k8s-version-805757 kubelet[663]: E1014 14:30:05.189312     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.931457  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:06 old-k8s-version-805757 kubelet[663]: E1014 14:30:06.193682     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.931692  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:07 old-k8s-version-805757 kubelet[663]: E1014 14:30:07.313589     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.931887  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:19 old-k8s-version-805757 kubelet[663]: E1014 14:30:19.309716     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.932213  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:20 old-k8s-version-805757 kubelet[663]: E1014 14:30:20.309049     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.932397  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:31 old-k8s-version-805757 kubelet[663]: E1014 14:30:31.309588     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.932739  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:32 old-k8s-version-805757 kubelet[663]: E1014 14:30:32.309235     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.932923  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:42 old-k8s-version-805757 kubelet[663]: E1014 14:30:42.309932     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.933269  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:44 old-k8s-version-805757 kubelet[663]: E1014 14:30:44.309221     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.933455  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:53 old-k8s-version-805757 kubelet[663]: E1014 14:30:53.316385     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.933790  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:56 old-k8s-version-805757 kubelet[663]: E1014 14:30:56.309146     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.936214  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:07 old-k8s-version-805757 kubelet[663]: E1014 14:31:07.318192     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1014 14:33:33.936546  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:09 old-k8s-version-805757 kubelet[663]: E1014 14:31:09.309131     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.936876  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:22 old-k8s-version-805757 kubelet[663]: E1014 14:31:22.309175     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.937067  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:22 old-k8s-version-805757 kubelet[663]: E1014 14:31:22.310110     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.937382  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:34 old-k8s-version-805757 kubelet[663]: E1014 14:31:34.309480     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.937836  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:34 old-k8s-version-805757 kubelet[663]: E1014 14:31:34.435860     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.938165  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:35 old-k8s-version-805757 kubelet[663]: E1014 14:31:35.819352     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.938348  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:47 old-k8s-version-805757 kubelet[663]: E1014 14:31:47.309512     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.938678  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:50 old-k8s-version-805757 kubelet[663]: E1014 14:31:50.309308     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.938860  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:02 old-k8s-version-805757 kubelet[663]: E1014 14:32:02.309592     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.939185  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:05 old-k8s-version-805757 kubelet[663]: E1014 14:32:05.313117     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.939373  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:14 old-k8s-version-805757 kubelet[663]: E1014 14:32:14.309655     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.939697  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:16 old-k8s-version-805757 kubelet[663]: E1014 14:32:16.309258     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.939879  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:27 old-k8s-version-805757 kubelet[663]: E1014 14:32:27.309768     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.940225  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:31 old-k8s-version-805757 kubelet[663]: E1014 14:32:31.309863     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.940409  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:41 old-k8s-version-805757 kubelet[663]: E1014 14:32:41.312511     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.940736  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:42 old-k8s-version-805757 kubelet[663]: E1014 14:32:42.309556     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.941071  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:53 old-k8s-version-805757 kubelet[663]: E1014 14:32:53.310344     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.941255  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:56 old-k8s-version-805757 kubelet[663]: E1014 14:32:56.309439     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.941579  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:08 old-k8s-version-805757 kubelet[663]: E1014 14:33:08.309189     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.941762  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:11 old-k8s-version-805757 kubelet[663]: E1014 14:33:11.309693     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.942087  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:21 old-k8s-version-805757 kubelet[663]: E1014 14:33:21.309193     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.942269  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:23 old-k8s-version-805757 kubelet[663]: E1014 14:33:23.310309     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.942596  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:33 old-k8s-version-805757 kubelet[663]: E1014 14:33:33.309615     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
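The "Found kubelet problem" warnings above come from scanning the kubelet journal (gathered with `sudo journalctl -u kubelet -n 400`) for pod sync errors; in this run they are the metrics-server ImagePullBackOff against the unresolvable fake.domain registry and the dashboard-metrics-scraper CrashLoopBackOff. A rough, hypothetical equivalent of that scan is sketched below; the match patterns are inferred from the lines shown here, not taken from minikube's logs.go rules.

```go
// Package kubeletscan sketches the problem scan reflected in the warnings
// above: read the kubelet journal and surface pod-sync error lines stuck in
// image-pull or crash-loop back-off. Patterns are assumptions based on this
// log, not minikube's actual detection rules.
package kubeletscan

import (
	"bufio"
	"os/exec"
	"strings"
)

func findKubeletProblems() ([]string, error) {
	out, err := exec.Command("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400").Output()
	if err != nil {
		return nil, err
	}
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		line := sc.Text()
		// Flag the failure modes visible in this run: ErrImagePull /
		// ImagePullBackOff on fake.domain and the dashboard scraper crash loop.
		if strings.Contains(line, "Error syncing pod") &&
			(strings.Contains(line, "ImagePullBackOff") ||
				strings.Contains(line, "ErrImagePull") ||
				strings.Contains(line, "CrashLoopBackOff")) {
			problems = append(problems, line)
		}
	}
	return problems, sc.Err()
}
```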
	I1014 14:33:33.942606  216259 logs.go:123] Gathering logs for kube-apiserver [251d9455c11c639858e466963ec56324aeb83e9fc382fbe263872a371f538c75] ...
	I1014 14:33:33.942620  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251d9455c11c639858e466963ec56324aeb83e9fc382fbe263872a371f538c75"
	I1014 14:33:34.026421  216259 logs.go:123] Gathering logs for kubernetes-dashboard [d77ae50b9c30cf70a9c9234c8e136c5bbfbc2df275ea64fc301df3a39321a592] ...
	I1014 14:33:34.026454  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d77ae50b9c30cf70a9c9234c8e136c5bbfbc2df275ea64fc301df3a39321a592"
	I1014 14:33:34.078590  216259 logs.go:123] Gathering logs for storage-provisioner [72aa351ee44e0112bfe7f6c5290b71f617c5f61c9d471cb16a86fa4783a604cc] ...
	I1014 14:33:34.078624  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72aa351ee44e0112bfe7f6c5290b71f617c5f61c9d471cb16a86fa4783a604cc"
	I1014 14:33:34.119813  216259 out.go:358] Setting ErrFile to fd 2...
	I1014 14:33:34.119843  216259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1014 14:33:34.119917  216259 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1014 14:33:34.119930  216259 out.go:270]   Oct 14 14:33:08 old-k8s-version-805757 kubelet[663]: E1014 14:33:08.309189     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	  Oct 14 14:33:08 old-k8s-version-805757 kubelet[663]: E1014 14:33:08.309189     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:34.119938  216259 out.go:270]   Oct 14 14:33:11 old-k8s-version-805757 kubelet[663]: E1014 14:33:11.309693     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 14 14:33:11 old-k8s-version-805757 kubelet[663]: E1014 14:33:11.309693     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:34.119967  216259 out.go:270]   Oct 14 14:33:21 old-k8s-version-805757 kubelet[663]: E1014 14:33:21.309193     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	  Oct 14 14:33:21 old-k8s-version-805757 kubelet[663]: E1014 14:33:21.309193     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:34.119983  216259 out.go:270]   Oct 14 14:33:23 old-k8s-version-805757 kubelet[663]: E1014 14:33:23.310309     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 14 14:33:23 old-k8s-version-805757 kubelet[663]: E1014 14:33:23.310309     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:34.120001  216259 out.go:270]   Oct 14 14:33:33 old-k8s-version-805757 kubelet[663]: E1014 14:33:33.309615     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	  Oct 14 14:33:33 old-k8s-version-805757 kubelet[663]: E1014 14:33:33.309615     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	I1014 14:33:34.120016  216259 out.go:358] Setting ErrFile to fd 2...
	I1014 14:33:34.120023  216259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:33:44.120900  216259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 14:33:44.133023  216259 api_server.go:72] duration metric: took 5m58.684236678s to wait for apiserver process to appear ...
	I1014 14:33:44.133085  216259 api_server.go:88] waiting for apiserver healthz status ...
	I1014 14:33:44.133120  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1014 14:33:44.133174  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 14:33:44.170555  216259 cri.go:89] found id: "251d9455c11c639858e466963ec56324aeb83e9fc382fbe263872a371f538c75"
	I1014 14:33:44.170580  216259 cri.go:89] found id: "a6686cf4c0bd51dce22fafc941c8449ae0ef2121b23d5e7627deed041cf5110a"
	I1014 14:33:44.170586  216259 cri.go:89] found id: ""
	I1014 14:33:44.170594  216259 logs.go:282] 2 containers: [251d9455c11c639858e466963ec56324aeb83e9fc382fbe263872a371f538c75 a6686cf4c0bd51dce22fafc941c8449ae0ef2121b23d5e7627deed041cf5110a]
	I1014 14:33:44.170646  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.174119  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.177502  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1014 14:33:44.177578  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 14:33:44.224527  216259 cri.go:89] found id: "79cda810f8eedaae70b852ca2bbe9f942797b30c95e1a21f945f4c58a2a82667"
	I1014 14:33:44.224545  216259 cri.go:89] found id: "c197ec87040349dc58c31fbfb7bfc3a706e94a38a10ec1ef8a5c3e56acc68de7"
	I1014 14:33:44.224550  216259 cri.go:89] found id: ""
	I1014 14:33:44.224557  216259 logs.go:282] 2 containers: [79cda810f8eedaae70b852ca2bbe9f942797b30c95e1a21f945f4c58a2a82667 c197ec87040349dc58c31fbfb7bfc3a706e94a38a10ec1ef8a5c3e56acc68de7]
	I1014 14:33:44.224612  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.228575  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.232598  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1014 14:33:44.232668  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 14:33:44.270635  216259 cri.go:89] found id: "5f8a8a7df27832dc912d6c524e75b1312fa4e33d94b6dc73cc3554afc4f38900"
	I1014 14:33:44.270658  216259 cri.go:89] found id: "a85040ad3d5d4ba4994b0450b39b274c9b3bd1d4a896f3aa97a7b18f001ae847"
	I1014 14:33:44.270663  216259 cri.go:89] found id: ""
	I1014 14:33:44.270671  216259 logs.go:282] 2 containers: [5f8a8a7df27832dc912d6c524e75b1312fa4e33d94b6dc73cc3554afc4f38900 a85040ad3d5d4ba4994b0450b39b274c9b3bd1d4a896f3aa97a7b18f001ae847]
	I1014 14:33:44.270726  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.274335  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.277724  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1014 14:33:44.277802  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 14:33:44.317752  216259 cri.go:89] found id: "2ffd812c6a8f3a0c8955aa0bc1dcda72cd2b449d10f74400dd5f13a554aa6d85"
	I1014 14:33:44.317776  216259 cri.go:89] found id: "c0e7973b2a3d77c06ba4b0697deba8dc30ded1bfe798e7724746ed6da918124a"
	I1014 14:33:44.317781  216259 cri.go:89] found id: ""
	I1014 14:33:44.317788  216259 logs.go:282] 2 containers: [2ffd812c6a8f3a0c8955aa0bc1dcda72cd2b449d10f74400dd5f13a554aa6d85 c0e7973b2a3d77c06ba4b0697deba8dc30ded1bfe798e7724746ed6da918124a]
	I1014 14:33:44.317870  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.321413  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.325175  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1014 14:33:44.325249  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 14:33:44.362783  216259 cri.go:89] found id: "d02c985f81647cbe7e1a590dd7acca3343a87193b8a80f164dece0f9e2c5c560"
	I1014 14:33:44.362817  216259 cri.go:89] found id: "2ad5db99d8a9f7057e1fd778bf0ca4e5c68f3f9822b3a1bb41bc768d030608e2"
	I1014 14:33:44.362823  216259 cri.go:89] found id: ""
	I1014 14:33:44.362830  216259 logs.go:282] 2 containers: [d02c985f81647cbe7e1a590dd7acca3343a87193b8a80f164dece0f9e2c5c560 2ad5db99d8a9f7057e1fd778bf0ca4e5c68f3f9822b3a1bb41bc768d030608e2]
	I1014 14:33:44.362887  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.366408  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.370140  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 14:33:44.370214  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 14:33:44.417871  216259 cri.go:89] found id: "1d3792d83fc3d30024de05f8b74700440d6d0332131afc49b3bb30a2654a5ff0"
	I1014 14:33:44.417896  216259 cri.go:89] found id: "b68f537421d89ef1144a9d150d1fb404e0cf70cf140e25288a5584f08e599341"
	I1014 14:33:44.417902  216259 cri.go:89] found id: ""
	I1014 14:33:44.417909  216259 logs.go:282] 2 containers: [1d3792d83fc3d30024de05f8b74700440d6d0332131afc49b3bb30a2654a5ff0 b68f537421d89ef1144a9d150d1fb404e0cf70cf140e25288a5584f08e599341]
	I1014 14:33:44.417994  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.421787  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.425502  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1014 14:33:44.425596  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 14:33:44.482599  216259 cri.go:89] found id: "1eb5f19222f629d40890237b452431b7fec59dca210f80c94703b2a47e4b1a5f"
	I1014 14:33:44.482628  216259 cri.go:89] found id: "7d4c84315f92a180b18b91af4a17a095d3b3f788c1c3310a8ad6e6a6174b60ed"
	I1014 14:33:44.482634  216259 cri.go:89] found id: ""
	I1014 14:33:44.482641  216259 logs.go:282] 2 containers: [1eb5f19222f629d40890237b452431b7fec59dca210f80c94703b2a47e4b1a5f 7d4c84315f92a180b18b91af4a17a095d3b3f788c1c3310a8ad6e6a6174b60ed]
	I1014 14:33:44.482714  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.486782  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.490394  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1014 14:33:44.490491  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 14:33:44.528561  216259 cri.go:89] found id: "9fd76e6e07f511d2eda883f06e9d54e137a9eee4cae7837d973fd4688fa6ef98"
	I1014 14:33:44.528583  216259 cri.go:89] found id: "72aa351ee44e0112bfe7f6c5290b71f617c5f61c9d471cb16a86fa4783a604cc"
	I1014 14:33:44.528589  216259 cri.go:89] found id: ""
	I1014 14:33:44.528595  216259 logs.go:282] 2 containers: [9fd76e6e07f511d2eda883f06e9d54e137a9eee4cae7837d973fd4688fa6ef98 72aa351ee44e0112bfe7f6c5290b71f617c5f61c9d471cb16a86fa4783a604cc]
	I1014 14:33:44.528649  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.532284  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.535798  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 14:33:44.535874  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 14:33:44.578853  216259 cri.go:89] found id: "d77ae50b9c30cf70a9c9234c8e136c5bbfbc2df275ea64fc301df3a39321a592"
	I1014 14:33:44.578877  216259 cri.go:89] found id: ""
	I1014 14:33:44.578885  216259 logs.go:282] 1 containers: [d77ae50b9c30cf70a9c9234c8e136c5bbfbc2df275ea64fc301df3a39321a592]
	I1014 14:33:44.578961  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.582867  216259 logs.go:123] Gathering logs for describe nodes ...
	I1014 14:33:44.582892  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 14:33:44.752896  216259 logs.go:123] Gathering logs for kube-apiserver [251d9455c11c639858e466963ec56324aeb83e9fc382fbe263872a371f538c75] ...
	I1014 14:33:44.752928  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251d9455c11c639858e466963ec56324aeb83e9fc382fbe263872a371f538c75"
	I1014 14:33:44.830813  216259 logs.go:123] Gathering logs for kube-apiserver [a6686cf4c0bd51dce22fafc941c8449ae0ef2121b23d5e7627deed041cf5110a] ...
	I1014 14:33:44.830847  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6686cf4c0bd51dce22fafc941c8449ae0ef2121b23d5e7627deed041cf5110a"
	I1014 14:33:44.889290  216259 logs.go:123] Gathering logs for etcd [79cda810f8eedaae70b852ca2bbe9f942797b30c95e1a21f945f4c58a2a82667] ...
	I1014 14:33:44.889327  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79cda810f8eedaae70b852ca2bbe9f942797b30c95e1a21f945f4c58a2a82667"
	I1014 14:33:44.942768  216259 logs.go:123] Gathering logs for coredns [5f8a8a7df27832dc912d6c524e75b1312fa4e33d94b6dc73cc3554afc4f38900] ...
	I1014 14:33:44.942800  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f8a8a7df27832dc912d6c524e75b1312fa4e33d94b6dc73cc3554afc4f38900"
	I1014 14:33:44.986308  216259 logs.go:123] Gathering logs for kube-scheduler [2ffd812c6a8f3a0c8955aa0bc1dcda72cd2b449d10f74400dd5f13a554aa6d85] ...
	I1014 14:33:44.986352  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ffd812c6a8f3a0c8955aa0bc1dcda72cd2b449d10f74400dd5f13a554aa6d85"
	I1014 14:33:45.073595  216259 logs.go:123] Gathering logs for kubelet ...
	I1014 14:33:45.073705  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1014 14:33:45.198592  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:06 old-k8s-version-805757 kubelet[663]: E1014 14:28:06.266848     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1014 14:33:45.198804  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:06 old-k8s-version-805757 kubelet[663]: E1014 14:28:06.684312     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.203091  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:26 old-k8s-version-805757 kubelet[663]: E1014 14:28:26.249917     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1014 14:33:45.203689  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:27 old-k8s-version-805757 kubelet[663]: E1014 14:28:27.815653     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.204013  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:28 old-k8s-version-805757 kubelet[663]: E1014 14:28:28.824018     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.204666  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:35 old-k8s-version-805757 kubelet[663]: E1014 14:28:35.818926     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.205116  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:37 old-k8s-version-805757 kubelet[663]: E1014 14:28:37.852417     663 pod_workers.go:191] Error syncing pod 98cb5c4c-6a3c-475a-bd4a-7fbf7a77d593 ("storage-provisioner_kube-system(98cb5c4c-6a3c-475a-bd4a-7fbf7a77d593)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(98cb5c4c-6a3c-475a-bd4a-7fbf7a77d593)"
	W1014 14:33:45.205314  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:41 old-k8s-version-805757 kubelet[663]: E1014 14:28:41.310054     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.206366  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:50 old-k8s-version-805757 kubelet[663]: E1014 14:28:50.896758     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.218156  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:53 old-k8s-version-805757 kubelet[663]: E1014 14:28:53.334247     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1014 14:33:45.218513  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:55 old-k8s-version-805757 kubelet[663]: E1014 14:28:55.818942     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.218699  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:06 old-k8s-version-805757 kubelet[663]: E1014 14:29:06.323071     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.219022  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:08 old-k8s-version-805757 kubelet[663]: E1014 14:29:08.309155     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.219202  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:18 old-k8s-version-805757 kubelet[663]: E1014 14:29:18.309720     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.219791  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:22 old-k8s-version-805757 kubelet[663]: E1014 14:29:22.029488     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.220114  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:25 old-k8s-version-805757 kubelet[663]: E1014 14:29:25.818872     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.220295  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:31 old-k8s-version-805757 kubelet[663]: E1014 14:29:31.309563     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.220631  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:37 old-k8s-version-805757 kubelet[663]: E1014 14:29:37.309788     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.223090  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:43 old-k8s-version-805757 kubelet[663]: E1014 14:29:43.326510     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1014 14:33:45.223425  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:52 old-k8s-version-805757 kubelet[663]: E1014 14:29:52.309613     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.223604  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:55 old-k8s-version-805757 kubelet[663]: E1014 14:29:55.314732     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.224185  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:05 old-k8s-version-805757 kubelet[663]: E1014 14:30:05.189312     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.224511  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:06 old-k8s-version-805757 kubelet[663]: E1014 14:30:06.193682     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.224696  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:07 old-k8s-version-805757 kubelet[663]: E1014 14:30:07.313589     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.224880  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:19 old-k8s-version-805757 kubelet[663]: E1014 14:30:19.309716     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.225215  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:20 old-k8s-version-805757 kubelet[663]: E1014 14:30:20.309049     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.225399  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:31 old-k8s-version-805757 kubelet[663]: E1014 14:30:31.309588     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.225733  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:32 old-k8s-version-805757 kubelet[663]: E1014 14:30:32.309235     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.226050  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:42 old-k8s-version-805757 kubelet[663]: E1014 14:30:42.309932     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.226377  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:44 old-k8s-version-805757 kubelet[663]: E1014 14:30:44.309221     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.226560  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:53 old-k8s-version-805757 kubelet[663]: E1014 14:30:53.316385     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.226889  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:56 old-k8s-version-805757 kubelet[663]: E1014 14:30:56.309146     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.229323  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:07 old-k8s-version-805757 kubelet[663]: E1014 14:31:07.318192     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1014 14:33:45.229654  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:09 old-k8s-version-805757 kubelet[663]: E1014 14:31:09.309131     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.229981  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:22 old-k8s-version-805757 kubelet[663]: E1014 14:31:22.309175     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.230162  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:22 old-k8s-version-805757 kubelet[663]: E1014 14:31:22.310110     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.230471  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:34 old-k8s-version-805757 kubelet[663]: E1014 14:31:34.309480     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.230923  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:34 old-k8s-version-805757 kubelet[663]: E1014 14:31:34.435860     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.231247  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:35 old-k8s-version-805757 kubelet[663]: E1014 14:31:35.819352     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.236078  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:47 old-k8s-version-805757 kubelet[663]: E1014 14:31:47.309512     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.237896  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:50 old-k8s-version-805757 kubelet[663]: E1014 14:31:50.309308     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.238465  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:02 old-k8s-version-805757 kubelet[663]: E1014 14:32:02.309592     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.252418  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:05 old-k8s-version-805757 kubelet[663]: E1014 14:32:05.313117     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.252614  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:14 old-k8s-version-805757 kubelet[663]: E1014 14:32:14.309655     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.252960  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:16 old-k8s-version-805757 kubelet[663]: E1014 14:32:16.309258     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.253193  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:27 old-k8s-version-805757 kubelet[663]: E1014 14:32:27.309768     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.253523  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:31 old-k8s-version-805757 kubelet[663]: E1014 14:32:31.309863     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.254681  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:41 old-k8s-version-805757 kubelet[663]: E1014 14:32:41.312511     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.255240  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:42 old-k8s-version-805757 kubelet[663]: E1014 14:32:42.309556     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.270793  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:53 old-k8s-version-805757 kubelet[663]: E1014 14:32:53.310344     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.271466  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:56 old-k8s-version-805757 kubelet[663]: E1014 14:32:56.309439     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.273796  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:08 old-k8s-version-805757 kubelet[663]: E1014 14:33:08.309189     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.274454  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:11 old-k8s-version-805757 kubelet[663]: E1014 14:33:11.309693     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.274969  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:21 old-k8s-version-805757 kubelet[663]: E1014 14:33:21.309193     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.275169  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:23 old-k8s-version-805757 kubelet[663]: E1014 14:33:23.310309     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.275499  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:33 old-k8s-version-805757 kubelet[663]: E1014 14:33:33.309615     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.275737  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:37 old-k8s-version-805757 kubelet[663]: E1014 14:33:37.309694     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1014 14:33:45.275746  216259 logs.go:123] Gathering logs for dmesg ...
	I1014 14:33:45.275762  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 14:33:45.298044  216259 logs.go:123] Gathering logs for containerd ...
	I1014 14:33:45.298085  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1014 14:33:45.376459  216259 logs.go:123] Gathering logs for kube-controller-manager [b68f537421d89ef1144a9d150d1fb404e0cf70cf140e25288a5584f08e599341] ...
	I1014 14:33:45.376528  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b68f537421d89ef1144a9d150d1fb404e0cf70cf140e25288a5584f08e599341"
	I1014 14:33:45.466492  216259 logs.go:123] Gathering logs for storage-provisioner [9fd76e6e07f511d2eda883f06e9d54e137a9eee4cae7837d973fd4688fa6ef98] ...
	I1014 14:33:45.466528  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9fd76e6e07f511d2eda883f06e9d54e137a9eee4cae7837d973fd4688fa6ef98"
	I1014 14:33:45.515454  216259 logs.go:123] Gathering logs for kube-controller-manager [1d3792d83fc3d30024de05f8b74700440d6d0332131afc49b3bb30a2654a5ff0] ...
	I1014 14:33:45.515479  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3792d83fc3d30024de05f8b74700440d6d0332131afc49b3bb30a2654a5ff0"
	I1014 14:33:45.569851  216259 logs.go:123] Gathering logs for kubernetes-dashboard [d77ae50b9c30cf70a9c9234c8e136c5bbfbc2df275ea64fc301df3a39321a592] ...
	I1014 14:33:45.569885  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d77ae50b9c30cf70a9c9234c8e136c5bbfbc2df275ea64fc301df3a39321a592"
	I1014 14:33:45.618928  216259 logs.go:123] Gathering logs for coredns [a85040ad3d5d4ba4994b0450b39b274c9b3bd1d4a896f3aa97a7b18f001ae847] ...
	I1014 14:33:45.618956  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a85040ad3d5d4ba4994b0450b39b274c9b3bd1d4a896f3aa97a7b18f001ae847"
	I1014 14:33:45.664497  216259 logs.go:123] Gathering logs for kube-proxy [2ad5db99d8a9f7057e1fd778bf0ca4e5c68f3f9822b3a1bb41bc768d030608e2] ...
	I1014 14:33:45.664526  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad5db99d8a9f7057e1fd778bf0ca4e5c68f3f9822b3a1bb41bc768d030608e2"
	I1014 14:33:45.705177  216259 logs.go:123] Gathering logs for kindnet [7d4c84315f92a180b18b91af4a17a095d3b3f788c1c3310a8ad6e6a6174b60ed] ...
	I1014 14:33:45.705207  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d4c84315f92a180b18b91af4a17a095d3b3f788c1c3310a8ad6e6a6174b60ed"
	I1014 14:33:45.746170  216259 logs.go:123] Gathering logs for storage-provisioner [72aa351ee44e0112bfe7f6c5290b71f617c5f61c9d471cb16a86fa4783a604cc] ...
	I1014 14:33:45.746197  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72aa351ee44e0112bfe7f6c5290b71f617c5f61c9d471cb16a86fa4783a604cc"
	I1014 14:33:45.792090  216259 logs.go:123] Gathering logs for etcd [c197ec87040349dc58c31fbfb7bfc3a706e94a38a10ec1ef8a5c3e56acc68de7] ...
	I1014 14:33:45.792120  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c197ec87040349dc58c31fbfb7bfc3a706e94a38a10ec1ef8a5c3e56acc68de7"
	I1014 14:33:45.834397  216259 logs.go:123] Gathering logs for kube-scheduler [c0e7973b2a3d77c06ba4b0697deba8dc30ded1bfe798e7724746ed6da918124a] ...
	I1014 14:33:45.834428  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0e7973b2a3d77c06ba4b0697deba8dc30ded1bfe798e7724746ed6da918124a"
	I1014 14:33:45.878661  216259 logs.go:123] Gathering logs for container status ...
	I1014 14:33:45.878691  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 14:33:45.959148  216259 logs.go:123] Gathering logs for kube-proxy [d02c985f81647cbe7e1a590dd7acca3343a87193b8a80f164dece0f9e2c5c560] ...
	I1014 14:33:45.959178  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d02c985f81647cbe7e1a590dd7acca3343a87193b8a80f164dece0f9e2c5c560"
	I1014 14:33:46.009192  216259 logs.go:123] Gathering logs for kindnet [1eb5f19222f629d40890237b452431b7fec59dca210f80c94703b2a47e4b1a5f] ...
	I1014 14:33:46.009223  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eb5f19222f629d40890237b452431b7fec59dca210f80c94703b2a47e4b1a5f"
	I1014 14:33:46.062167  216259 out.go:358] Setting ErrFile to fd 2...
	I1014 14:33:46.062197  216259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1014 14:33:46.062248  216259 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1014 14:33:46.062263  216259 out.go:270]   Oct 14 14:33:11 old-k8s-version-805757 kubelet[663]: E1014 14:33:11.309693     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 14 14:33:11 old-k8s-version-805757 kubelet[663]: E1014 14:33:11.309693     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:46.062281  216259 out.go:270]   Oct 14 14:33:21 old-k8s-version-805757 kubelet[663]: E1014 14:33:21.309193     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	  Oct 14 14:33:21 old-k8s-version-805757 kubelet[663]: E1014 14:33:21.309193     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:46.062290  216259 out.go:270]   Oct 14 14:33:23 old-k8s-version-805757 kubelet[663]: E1014 14:33:23.310309     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 14 14:33:23 old-k8s-version-805757 kubelet[663]: E1014 14:33:23.310309     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:46.062297  216259 out.go:270]   Oct 14 14:33:33 old-k8s-version-805757 kubelet[663]: E1014 14:33:33.309615     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	  Oct 14 14:33:33 old-k8s-version-805757 kubelet[663]: E1014 14:33:33.309615     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:46.062302  216259 out.go:270]   Oct 14 14:33:37 old-k8s-version-805757 kubelet[663]: E1014 14:33:37.309694     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Oct 14 14:33:37 old-k8s-version-805757 kubelet[663]: E1014 14:33:37.309694     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1014 14:33:46.062308  216259 out.go:358] Setting ErrFile to fd 2...
	I1014 14:33:46.062319  216259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:33:56.063973  216259 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 14:33:56.076815  216259 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1014 14:33:56.083169  216259 out.go:201] 
	W1014 14:33:56.086386  216259 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1014 14:33:56.086601  216259 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1014 14:33:56.086656  216259 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1014 14:33:56.086698  216259 out.go:270] * 
	* 
	W1014 14:33:56.087684  216259 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 14:33:56.089991  216259 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-805757 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-805757
helpers_test.go:235: (dbg) docker inspect old-k8s-version-805757:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3c3ddd9abb32d2e925e86e5d3b2eeafbb83d0b059cf746b56c6130742da8dba6",
	        "Created": "2024-10-14T14:24:43.678292499Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 216549,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-14T14:27:37.058637616Z",
	            "FinishedAt": "2024-10-14T14:27:35.89847392Z"
	        },
	        "Image": "sha256:e5ca9b83e048da5ecbd9864892b13b9f06d661ec5eae41590141157c6fe62bf7",
	        "ResolvConfPath": "/var/lib/docker/containers/3c3ddd9abb32d2e925e86e5d3b2eeafbb83d0b059cf746b56c6130742da8dba6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3c3ddd9abb32d2e925e86e5d3b2eeafbb83d0b059cf746b56c6130742da8dba6/hostname",
	        "HostsPath": "/var/lib/docker/containers/3c3ddd9abb32d2e925e86e5d3b2eeafbb83d0b059cf746b56c6130742da8dba6/hosts",
	        "LogPath": "/var/lib/docker/containers/3c3ddd9abb32d2e925e86e5d3b2eeafbb83d0b059cf746b56c6130742da8dba6/3c3ddd9abb32d2e925e86e5d3b2eeafbb83d0b059cf746b56c6130742da8dba6-json.log",
	        "Name": "/old-k8s-version-805757",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-805757:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-805757",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/31a62d0ba4ecbbbbb43c4e42d5cb2f32bd2ab2c562052cac8e5fa11a538d14c2-init/diff:/var/lib/docker/overlay2/d8164b8c8c613df332ab63ecaf21de80c344b1fe32149b3955f3e5228a19c126/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31a62d0ba4ecbbbbb43c4e42d5cb2f32bd2ab2c562052cac8e5fa11a538d14c2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31a62d0ba4ecbbbbb43c4e42d5cb2f32bd2ab2c562052cac8e5fa11a538d14c2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31a62d0ba4ecbbbbb43c4e42d5cb2f32bd2ab2c562052cac8e5fa11a538d14c2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-805757",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-805757/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-805757",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-805757",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-805757",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1a750accecce1a60284739f90a02d3de1cf897bcab3af20fa2b38b71a4724e7a",
	            "SandboxKey": "/var/run/docker/netns/1a750accecce",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-805757": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d22dc831c6453661466a7286df01c85f24fcf0a616a62be853f53c76015358b8",
	                    "EndpointID": "ea4518e72473dafd0b5d040db0139fe80b0e917e7fc5a5c87c36fd41632866b9",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-805757",
	                        "3c3ddd9abb32"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-805757 -n old-k8s-version-805757
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-805757 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-805757 logs -n 25: (2.037815046s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-017567 sudo find                             | cilium-017567             | jenkins | v1.34.0 | 14 Oct 24 14:23 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-017567 sudo crio                             | cilium-017567             | jenkins | v1.34.0 | 14 Oct 24 14:23 UTC |                     |
	|         | config                                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-017567                                       | cilium-017567             | jenkins | v1.34.0 | 14 Oct 24 14:23 UTC | 14 Oct 24 14:23 UTC |
	| start   | -p force-systemd-env-594800                            | force-systemd-env-594800  | jenkins | v1.34.0 | 14 Oct 24 14:23 UTC | 14 Oct 24 14:23 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=docker                                   |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-418551                              | force-systemd-flag-418551 | jenkins | v1.34.0 | 14 Oct 24 14:23 UTC | 14 Oct 24 14:23 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-418551                           | force-systemd-flag-418551 | jenkins | v1.34.0 | 14 Oct 24 14:23 UTC | 14 Oct 24 14:23 UTC |
	| start   | -p cert-expiration-007181                              | cert-expiration-007181    | jenkins | v1.34.0 | 14 Oct 24 14:23 UTC | 14 Oct 24 14:24 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-594800                               | force-systemd-env-594800  | jenkins | v1.34.0 | 14 Oct 24 14:23 UTC | 14 Oct 24 14:23 UTC |
	|         | ssh cat                                                |                           |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-594800                            | force-systemd-env-594800  | jenkins | v1.34.0 | 14 Oct 24 14:23 UTC | 14 Oct 24 14:23 UTC |
	| start   | -p cert-options-897597                                 | cert-options-897597       | jenkins | v1.34.0 | 14 Oct 24 14:23 UTC | 14 Oct 24 14:24 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| ssh     | cert-options-897597 ssh                                | cert-options-897597       | jenkins | v1.34.0 | 14 Oct 24 14:24 UTC | 14 Oct 24 14:24 UTC |
	|         | openssl x509 -text -noout -in                          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-897597 -- sudo                         | cert-options-897597       | jenkins | v1.34.0 | 14 Oct 24 14:24 UTC | 14 Oct 24 14:24 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                           |         |         |                     |                     |
	| delete  | -p cert-options-897597                                 | cert-options-897597       | jenkins | v1.34.0 | 14 Oct 24 14:24 UTC | 14 Oct 24 14:24 UTC |
	| start   | -p old-k8s-version-805757                              | old-k8s-version-805757    | jenkins | v1.34.0 | 14 Oct 24 14:24 UTC | 14 Oct 24 14:27 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| start   | -p cert-expiration-007181                              | cert-expiration-007181    | jenkins | v1.34.0 | 14 Oct 24 14:27 UTC | 14 Oct 24 14:27 UTC |
	|         | --memory=2048                                          |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	| delete  | -p cert-expiration-007181                              | cert-expiration-007181    | jenkins | v1.34.0 | 14 Oct 24 14:27 UTC | 14 Oct 24 14:27 UTC |
	| start   | -p no-preload-683238                                   | no-preload-683238         | jenkins | v1.34.0 | 14 Oct 24 14:27 UTC | 14 Oct 24 14:28 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-805757        | old-k8s-version-805757    | jenkins | v1.34.0 | 14 Oct 24 14:27 UTC | 14 Oct 24 14:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p old-k8s-version-805757                              | old-k8s-version-805757    | jenkins | v1.34.0 | 14 Oct 24 14:27 UTC | 14 Oct 24 14:27 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-805757             | old-k8s-version-805757    | jenkins | v1.34.0 | 14 Oct 24 14:27 UTC | 14 Oct 24 14:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p old-k8s-version-805757                              | old-k8s-version-805757    | jenkins | v1.34.0 | 14 Oct 24 14:27 UTC |                     |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                           |         |         |                     |                     |
	|         | --kvm-network=default                                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                           |         |         |                     |                     |
	|         | --keep-context=false                                   |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                           |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-683238             | no-preload-683238         | jenkins | v1.34.0 | 14 Oct 24 14:28 UTC | 14 Oct 24 14:28 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                           |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                           |         |         |                     |                     |
	| stop    | -p no-preload-683238                                   | no-preload-683238         | jenkins | v1.34.0 | 14 Oct 24 14:28 UTC | 14 Oct 24 14:28 UTC |
	|         | --alsologtostderr -v=3                                 |                           |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-683238                  | no-preload-683238         | jenkins | v1.34.0 | 14 Oct 24 14:28 UTC | 14 Oct 24 14:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                           |         |         |                     |                     |
	| start   | -p no-preload-683238                                   | no-preload-683238         | jenkins | v1.34.0 | 14 Oct 24 14:28 UTC | 14 Oct 24 14:33 UTC |
	|         | --memory=2200                                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                                      |                           |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                           |         |         |                     |                     |
	|         | --driver=docker                                        |                           |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                           |         |         |                     |                     |
	|---------|--------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 14:28:50
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 14:28:50.379210  221374 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:28:50.379436  221374 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:28:50.379462  221374 out.go:358] Setting ErrFile to fd 2...
	I1014 14:28:50.379482  221374 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:28:50.379765  221374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2229/.minikube/bin
	I1014 14:28:50.380185  221374 out.go:352] Setting JSON to false
	I1014 14:28:50.382524  221374 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4282,"bootTime":1728911849,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 14:28:50.382626  221374 start.go:139] virtualization:  
	I1014 14:28:50.386613  221374 out.go:177] * [no-preload-683238] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1014 14:28:50.388717  221374 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 14:28:50.388787  221374 notify.go:220] Checking for updates...
	I1014 14:28:50.393371  221374 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 14:28:50.395291  221374 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-2229/kubeconfig
	I1014 14:28:50.397187  221374 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2229/.minikube
	I1014 14:28:50.398670  221374 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 14:28:50.400388  221374 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 14:28:50.402974  221374 config.go:182] Loaded profile config "no-preload-683238": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1014 14:28:50.403505  221374 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 14:28:50.435194  221374 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1014 14:28:50.435329  221374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 14:28:50.537642  221374 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-14 14:28:50.520819373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 14:28:50.537756  221374 docker.go:318] overlay module found
	I1014 14:28:50.539950  221374 out.go:177] * Using the docker driver based on existing profile
	I1014 14:28:50.542467  221374 start.go:297] selected driver: docker
	I1014 14:28:50.542489  221374 start.go:901] validating driver "docker" against &{Name:no-preload-683238 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-683238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:28:50.542639  221374 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 14:28:50.543323  221374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 14:28:50.607103  221374 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-14 14:28:50.597647868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 14:28:50.607482  221374 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 14:28:50.607511  221374 cni.go:84] Creating CNI manager for ""
	I1014 14:28:50.607555  221374 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1014 14:28:50.607596  221374 start.go:340] cluster config:
	{Name:no-preload-683238 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-683238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:28:50.610767  221374 out.go:177] * Starting "no-preload-683238" primary control-plane node in "no-preload-683238" cluster
	I1014 14:28:50.612290  221374 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1014 14:28:50.614093  221374 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
	I1014 14:28:50.615568  221374 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1014 14:28:50.615630  221374 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1014 14:28:50.615712  221374 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/config.json ...
	I1014 14:28:50.616007  221374 cache.go:107] acquiring lock: {Name:mk0fc59a8e36761706e8da939386722b93ff7432 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:28:50.616093  221374 cache.go:115] /home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1014 14:28:50.616107  221374 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 104.911µs
	I1014 14:28:50.616119  221374 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1014 14:28:50.616134  221374 cache.go:107] acquiring lock: {Name:mk8337c6935bb51fd987653e266eb9d37ef5fdfb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:28:50.616168  221374 cache.go:115] /home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1014 14:28:50.616176  221374 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 43.701µs
	I1014 14:28:50.616182  221374 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1014 14:28:50.616209  221374 cache.go:107] acquiring lock: {Name:mk4ff6b5d48b9473572e8607cf4111408c80e5b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:28:50.616243  221374 cache.go:115] /home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1014 14:28:50.616254  221374 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 60.456µs
	I1014 14:28:50.616260  221374 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1014 14:28:50.616274  221374 cache.go:107] acquiring lock: {Name:mkdd135ba6fc2ad1ed63761156405afca1c3a605 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:28:50.616304  221374 cache.go:115] /home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1014 14:28:50.616309  221374 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 36.685µs
	I1014 14:28:50.616314  221374 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1014 14:28:50.616347  221374 cache.go:107] acquiring lock: {Name:mk9ee0261d2b83c545edacf079863f8e592d0808 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:28:50.616381  221374 cache.go:115] /home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1014 14:28:50.616392  221374 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 46.491µs
	I1014 14:28:50.616398  221374 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1014 14:28:50.616407  221374 cache.go:107] acquiring lock: {Name:mk3ac83fc5d572e6108ede10923d1acee318381f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:28:50.616437  221374 cache.go:115] /home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1014 14:28:50.616442  221374 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 36.448µs
	I1014 14:28:50.616451  221374 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1014 14:28:50.616460  221374 cache.go:107] acquiring lock: {Name:mk527abb9f064e36ef8ed0c47e89003c9474bacb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:28:50.616489  221374 cache.go:115] /home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1014 14:28:50.616496  221374 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 37.342µs
	I1014 14:28:50.616501  221374 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1014 14:28:50.616512  221374 cache.go:107] acquiring lock: {Name:mk0715e4d4bb8d8d11bcd6ca601bb88b3033a160 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:28:50.616545  221374 cache.go:115] /home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1014 14:28:50.616554  221374 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 42.249µs
	I1014 14:28:50.616559  221374 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19790-2229/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1014 14:28:50.616565  221374 cache.go:87] Successfully saved all images to host disk.
	I1014 14:28:50.635931  221374 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon, skipping pull
	I1014 14:28:50.635954  221374 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in daemon, skipping load
	I1014 14:28:50.635976  221374 cache.go:194] Successfully downloaded all kic artifacts
	I1014 14:28:50.636005  221374 start.go:360] acquireMachinesLock for no-preload-683238: {Name:mkcd47f9f207cb31d7581d281d9c84c5ca97b4da Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1014 14:28:50.636058  221374 start.go:364] duration metric: took 36.25µs to acquireMachinesLock for "no-preload-683238"
	I1014 14:28:50.636083  221374 start.go:96] Skipping create...Using existing machine configuration
	I1014 14:28:50.636089  221374 fix.go:54] fixHost starting: 
	I1014 14:28:50.636360  221374 cli_runner.go:164] Run: docker container inspect no-preload-683238 --format={{.State.Status}}
	I1014 14:28:50.653733  221374 fix.go:112] recreateIfNeeded on no-preload-683238: state=Stopped err=<nil>
	W1014 14:28:50.653765  221374 fix.go:138] unexpected machine state, will restart: <nil>
	I1014 14:28:50.657013  221374 out.go:177] * Restarting existing docker container for "no-preload-683238" ...
	I1014 14:28:46.862318  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:49.362381  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:50.658749  221374 cli_runner.go:164] Run: docker start no-preload-683238
	I1014 14:28:51.024358  221374 cli_runner.go:164] Run: docker container inspect no-preload-683238 --format={{.State.Status}}
	I1014 14:28:51.047961  221374 kic.go:430] container "no-preload-683238" state is running.
	I1014 14:28:51.048400  221374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-683238
	I1014 14:28:51.082652  221374 profile.go:143] Saving config to /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/config.json ...
	I1014 14:28:51.082890  221374 machine.go:93] provisionDockerMachine start ...
	I1014 14:28:51.082952  221374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-683238
	I1014 14:28:51.107792  221374 main.go:141] libmachine: Using SSH client type: native
	I1014 14:28:51.108081  221374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1014 14:28:51.108098  221374 main.go:141] libmachine: About to run SSH command:
	hostname
	I1014 14:28:51.108755  221374 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39524->127.0.0.1:33068: read: connection reset by peer
	I1014 14:28:54.241834  221374 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-683238
	
	I1014 14:28:54.241875  221374 ubuntu.go:169] provisioning hostname "no-preload-683238"
	I1014 14:28:54.242133  221374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-683238
	I1014 14:28:54.262900  221374 main.go:141] libmachine: Using SSH client type: native
	I1014 14:28:54.263132  221374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1014 14:28:54.263144  221374 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-683238 && echo "no-preload-683238" | sudo tee /etc/hostname
	I1014 14:28:54.418584  221374 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-683238
	
	I1014 14:28:54.418686  221374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-683238
	I1014 14:28:54.442123  221374 main.go:141] libmachine: Using SSH client type: native
	I1014 14:28:54.442379  221374 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413e90] 0x4166d0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1014 14:28:54.442405  221374 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-683238' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-683238/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-683238' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1014 14:28:54.577196  221374 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1014 14:28:54.577228  221374 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19790-2229/.minikube CaCertPath:/home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19790-2229/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19790-2229/.minikube}
	I1014 14:28:54.577247  221374 ubuntu.go:177] setting up certificates
	I1014 14:28:54.577257  221374 provision.go:84] configureAuth start
	I1014 14:28:54.577316  221374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-683238
	I1014 14:28:54.596241  221374 provision.go:143] copyHostCerts
	I1014 14:28:54.596315  221374 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-2229/.minikube/ca.pem, removing ...
	I1014 14:28:54.596332  221374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-2229/.minikube/ca.pem
	I1014 14:28:54.596411  221374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19790-2229/.minikube/ca.pem (1082 bytes)
	I1014 14:28:54.596528  221374 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-2229/.minikube/cert.pem, removing ...
	I1014 14:28:54.596539  221374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-2229/.minikube/cert.pem
	I1014 14:28:54.596571  221374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19790-2229/.minikube/cert.pem (1123 bytes)
	I1014 14:28:54.596632  221374 exec_runner.go:144] found /home/jenkins/minikube-integration/19790-2229/.minikube/key.pem, removing ...
	I1014 14:28:54.596642  221374 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19790-2229/.minikube/key.pem
	I1014 14:28:54.596667  221374 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19790-2229/.minikube/key.pem (1679 bytes)
	I1014 14:28:54.596724  221374 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19790-2229/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca-key.pem org=jenkins.no-preload-683238 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-683238]
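copyHostCerts refreshes the CA, client cert and key under .minikube, and the final line issues a server certificate signed by the minikube CA with the SANs listed above (127.0.0.1, 192.168.76.2, localhost, minikube, no-preload-683238). A rough sketch of that kind of issuance with crypto/x509, assuming the CA certificate and key have already been loaded from ca.pem and ca-key.pem; this is not minikube's actual cert helper, and key size and lifetime here are illustrative.

	package provision

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// issueServerCert signs a server certificate with the given CA, using the SANs
	// from the provision.go line above. Sketch only.
	func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
		serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-683238"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "no-preload-683238"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
		if err != nil {
			return err
		}
		return pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}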
	I1014 14:28:51.863570  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:54.361591  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:56.362014  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:28:55.526337  221374 provision.go:177] copyRemoteCerts
	I1014 14:28:55.526443  221374 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1014 14:28:55.526503  221374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-683238
	I1014 14:28:55.544521  221374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/no-preload-683238/id_rsa Username:docker}
	I1014 14:28:55.638768  221374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1014 14:28:55.665114  221374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1014 14:28:55.694263  221374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1014 14:28:55.719619  221374 provision.go:87] duration metric: took 1.142348555s to configureAuth
	I1014 14:28:55.719649  221374 ubuntu.go:193] setting minikube options for container-runtime
	I1014 14:28:55.719840  221374 config.go:182] Loaded profile config "no-preload-683238": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1014 14:28:55.719856  221374 machine.go:96] duration metric: took 4.636957015s to provisionDockerMachine
	I1014 14:28:55.719864  221374 start.go:293] postStartSetup for "no-preload-683238" (driver="docker")
	I1014 14:28:55.719874  221374 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1014 14:28:55.719929  221374 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1014 14:28:55.719978  221374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-683238
	I1014 14:28:55.737443  221374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/no-preload-683238/id_rsa Username:docker}
	I1014 14:28:55.837798  221374 ssh_runner.go:195] Run: cat /etc/os-release
	I1014 14:28:55.841512  221374 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1014 14:28:55.841553  221374 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1014 14:28:55.841565  221374 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1014 14:28:55.841572  221374 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1014 14:28:55.841588  221374 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-2229/.minikube/addons for local assets ...
	I1014 14:28:55.841647  221374 filesync.go:126] Scanning /home/jenkins/minikube-integration/19790-2229/.minikube/files for local assets ...
	I1014 14:28:55.841737  221374 filesync.go:149] local asset: /home/jenkins/minikube-integration/19790-2229/.minikube/files/etc/ssl/certs/75422.pem -> 75422.pem in /etc/ssl/certs
	I1014 14:28:55.841841  221374 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1014 14:28:55.851191  221374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/files/etc/ssl/certs/75422.pem --> /etc/ssl/certs/75422.pem (1708 bytes)
	I1014 14:28:55.877618  221374 start.go:296] duration metric: took 157.739198ms for postStartSetup
	I1014 14:28:55.877704  221374 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 14:28:55.877748  221374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-683238
	I1014 14:28:55.894456  221374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/no-preload-683238/id_rsa Username:docker}
	I1014 14:28:55.986343  221374 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1014 14:28:55.990818  221374 fix.go:56] duration metric: took 5.354721043s for fixHost
	I1014 14:28:55.990845  221374 start.go:83] releasing machines lock for "no-preload-683238", held for 5.354772343s
	I1014 14:28:55.990912  221374 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-683238
	I1014 14:28:56.011943  221374 ssh_runner.go:195] Run: cat /version.json
	I1014 14:28:56.012000  221374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-683238
	I1014 14:28:56.012241  221374 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1014 14:28:56.012316  221374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-683238
	I1014 14:28:56.042883  221374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/no-preload-683238/id_rsa Username:docker}
	I1014 14:28:56.050570  221374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/no-preload-683238/id_rsa Username:docker}
	I1014 14:28:56.276337  221374 ssh_runner.go:195] Run: systemctl --version
	I1014 14:28:56.280822  221374 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1014 14:28:56.285258  221374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I1014 14:28:56.302236  221374 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I1014 14:28:56.302354  221374 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1014 14:28:56.311210  221374 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
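The two find commands above first patch any loopback CNI config (adding a "name" field and pinning cniVersion to 1.0.0) and then rename bridge/podman configs to *.mk_disabled so they cannot conflict with the kindnet config installed later; here nothing needed disabling. A sketch of the disable step in Go, in a hypothetical helper package; the real step shells out to find/mv over SSH rather than running in-process.

	package provision

	import (
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNIConfigs renames bridge/podman CNI configs so they stop being
	// picked up, mirroring the `find ... -exec mv {} {}.mk_disabled` step. Sketch only.
	func disableBridgeCNIConfigs(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}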
	I1014 14:28:56.311276  221374 start.go:495] detecting cgroup driver to use...
	I1014 14:28:56.311324  221374 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1014 14:28:56.311410  221374 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1014 14:28:56.324934  221374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1014 14:28:56.337115  221374 docker.go:217] disabling cri-docker service (if available) ...
	I1014 14:28:56.337273  221374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1014 14:28:56.350254  221374 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1014 14:28:56.365381  221374 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1014 14:28:56.448613  221374 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1014 14:28:56.543167  221374 docker.go:233] disabling docker service ...
	I1014 14:28:56.543278  221374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1014 14:28:56.556076  221374 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1014 14:28:56.567743  221374 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1014 14:28:56.657615  221374 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1014 14:28:56.752156  221374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1014 14:28:56.764595  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1014 14:28:56.782142  221374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I1014 14:28:56.793152  221374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1014 14:28:56.802933  221374 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1014 14:28:56.803043  221374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1014 14:28:56.812437  221374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 14:28:56.821637  221374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1014 14:28:56.831031  221374 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1014 14:28:56.840501  221374 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1014 14:28:56.849928  221374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1014 14:28:56.860861  221374 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1014 14:28:56.870864  221374 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1014 14:28:56.882139  221374 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1014 14:28:56.891319  221374 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1014 14:28:56.899804  221374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:28:56.991760  221374 ssh_runner.go:195] Run: sudo systemctl restart containerd
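The sed runs above rewrite /etc/containerd/config.toml in place: the sandbox image is pinned to registry.k8s.io/pause:3.10, restrict_oom_score_adj is turned off, SystemdCgroup is set to false to match the detected cgroupfs driver, the legacy runtime names are mapped to io.containerd.runc.v2, conf_dir is pointed at /etc/cni/net.d, and unprivileged ports are enabled, after which containerd is restarted. The straight substitutions can be expressed as Go regex replacements over the file contents; an illustrative sketch (the enable_unprivileged_ports insertion is omitted here).

	package provision

	import "regexp"

	// patchContainerdConfig mirrors the sed edits from the log above. Sketch only,
	// not minikube's implementation.
	func patchContainerdConfig(cfg string) string {
		rules := []struct{ re, repl string }{
			{`(?m)^(\s*)sandbox_image = .*$`, `${1}sandbox_image = "registry.k8s.io/pause:3.10"`},
			{`(?m)^(\s*)restrict_oom_score_adj = .*$`, `${1}restrict_oom_score_adj = false`},
			{`(?m)^(\s*)SystemdCgroup = .*$`, `${1}SystemdCgroup = false`},
			{`"io\.containerd\.runtime\.v1\.linux"`, `"io.containerd.runc.v2"`},
			{`"io\.containerd\.runc\.v1"`, `"io.containerd.runc.v2"`},
			{`(?m)^(\s*)conf_dir = .*$`, `${1}conf_dir = "/etc/cni/net.d"`},
		}
		for _, r := range rules {
			cfg = regexp.MustCompile(r.re).ReplaceAllString(cfg, r.repl)
		}
		return cfg
	}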
	I1014 14:28:57.157235  221374 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I1014 14:28:57.157302  221374 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1014 14:28:57.161120  221374 start.go:563] Will wait 60s for crictl version
	I1014 14:28:57.161183  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:28:57.164649  221374 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1014 14:28:57.207635  221374 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I1014 14:28:57.207710  221374 ssh_runner.go:195] Run: containerd --version
	I1014 14:28:57.236583  221374 ssh_runner.go:195] Run: containerd --version
	I1014 14:28:57.263457  221374 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I1014 14:28:57.265445  221374 cli_runner.go:164] Run: docker network inspect no-preload-683238 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1014 14:28:57.281316  221374 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1014 14:28:57.285981  221374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
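This grep/echo/cp pipeline drops any stale host.minikube.internal line from /etc/hosts and appends the current gateway mapping (192.168.76.1), so the entry stays unique across restarts. A small Go equivalent, as a sketch with a hypothetical helper name.

	package provision

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry keeps exactly one "<ip>\t<host>" line in hostsPath, mirroring
	// the grep -v / echo / cp pipeline above. Sketch only.
	func ensureHostsEntry(hostsPath, ip, host string) error {
		data, err := os.ReadFile(hostsPath)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // drop any stale mapping for this host
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}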
	I1014 14:28:57.297931  221374 kubeadm.go:883] updating cluster {Name:no-preload-683238 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-683238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenk
ins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1014 14:28:57.298052  221374 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1014 14:28:57.298104  221374 ssh_runner.go:195] Run: sudo crictl images --output json
	I1014 14:28:57.342175  221374 containerd.go:627] all images are preloaded for containerd runtime.
	I1014 14:28:57.342200  221374 cache_images.go:84] Images are preloaded, skipping loading
	I1014 14:28:57.342208  221374 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.31.1 containerd true true} ...
	I1014 14:28:57.342327  221374 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-683238 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-683238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1014 14:28:57.342396  221374 ssh_runner.go:195] Run: sudo crictl info
	I1014 14:28:57.392483  221374 cni.go:84] Creating CNI manager for ""
	I1014 14:28:57.392533  221374 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1014 14:28:57.392544  221374 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1014 14:28:57.392567  221374 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-683238 NodeName:no-preload-683238 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1014 14:28:57.392687  221374 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-683238"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1014 14:28:57.392757  221374 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1014 14:28:57.403337  221374 binaries.go:44] Found k8s binaries, skipping transfer
	I1014 14:28:57.403412  221374 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1014 14:28:57.412266  221374 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1014 14:28:57.433440  221374 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1014 14:28:57.461496  221374 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2307 bytes)
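The three scp-from-memory writes above install the kubelet drop-in, the kubelet systemd unit, and the freshly rendered kubeadm config as /var/tmp/minikube/kubeadm.yaml.new; the config is the multi-document YAML shown earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One quick way to sanity-check such a manifest is to decode every document and collect the kinds; a sketch with gopkg.in/yaml.v3, an assumed dependency rather than what minikube itself uses at this step.

	package provision

	import (
		"errors"
		"io"
		"strings"

		"gopkg.in/yaml.v3"
	)

	// validateKubeadmYAML decodes every document in the generated kubeadm config
	// and returns the kinds it found. Sketch only.
	func validateKubeadmYAML(manifest string) ([]string, error) {
		dec := yaml.NewDecoder(strings.NewReader(manifest))
		var kinds []string
		for {
			var doc map[string]interface{}
			err := dec.Decode(&doc)
			if errors.Is(err, io.EOF) {
				return kinds, nil
			}
			if err != nil {
				return nil, err
			}
			kind, _ := doc["kind"].(string)
			kinds = append(kinds, kind)
		}
	}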
	I1014 14:28:57.480844  221374 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1014 14:28:57.484742  221374 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1014 14:28:57.496472  221374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:28:57.591659  221374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 14:28:57.607657  221374 certs.go:68] Setting up /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238 for IP: 192.168.76.2
	I1014 14:28:57.607681  221374 certs.go:194] generating shared ca certs ...
	I1014 14:28:57.607698  221374 certs.go:226] acquiring lock for ca certs: {Name:mk2a77364a9bb2b8250d1aa5761db5ebc543c9b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:28:57.607856  221374 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19790-2229/.minikube/ca.key
	I1014 14:28:57.607916  221374 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19790-2229/.minikube/proxy-client-ca.key
	I1014 14:28:57.607930  221374 certs.go:256] generating profile certs ...
	I1014 14:28:57.608030  221374 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/client.key
	I1014 14:28:57.608086  221374 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/apiserver.key.68264c57
	I1014 14:28:57.608137  221374 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/proxy-client.key
	I1014 14:28:57.608256  221374 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/7542.pem (1338 bytes)
	W1014 14:28:57.608301  221374 certs.go:480] ignoring /home/jenkins/minikube-integration/19790-2229/.minikube/certs/7542_empty.pem, impossibly tiny 0 bytes
	I1014 14:28:57.608315  221374 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca-key.pem (1675 bytes)
	I1014 14:28:57.608343  221374 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/ca.pem (1082 bytes)
	I1014 14:28:57.608369  221374 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/cert.pem (1123 bytes)
	I1014 14:28:57.608393  221374 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2229/.minikube/certs/key.pem (1679 bytes)
	I1014 14:28:57.608440  221374 certs.go:484] found cert: /home/jenkins/minikube-integration/19790-2229/.minikube/files/etc/ssl/certs/75422.pem (1708 bytes)
	I1014 14:28:57.609174  221374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1014 14:28:57.644542  221374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1014 14:28:57.671646  221374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1014 14:28:57.704281  221374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1014 14:28:57.730672  221374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1014 14:28:57.762260  221374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1014 14:28:57.799093  221374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1014 14:28:57.826859  221374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1014 14:28:57.858044  221374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/certs/7542.pem --> /usr/share/ca-certificates/7542.pem (1338 bytes)
	I1014 14:28:57.894215  221374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/files/etc/ssl/certs/75422.pem --> /usr/share/ca-certificates/75422.pem (1708 bytes)
	I1014 14:28:57.921555  221374 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19790-2229/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1014 14:28:57.952102  221374 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1014 14:28:57.970613  221374 ssh_runner.go:195] Run: openssl version
	I1014 14:28:57.978606  221374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/75422.pem && ln -fs /usr/share/ca-certificates/75422.pem /etc/ssl/certs/75422.pem"
	I1014 14:28:57.989015  221374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/75422.pem
	I1014 14:28:57.993492  221374 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 14 13:48 /usr/share/ca-certificates/75422.pem
	I1014 14:28:57.993580  221374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/75422.pem
	I1014 14:28:58.003911  221374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/75422.pem /etc/ssl/certs/3ec20f2e.0"
	I1014 14:28:58.014335  221374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1014 14:28:58.025165  221374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:28:58.029187  221374 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 14 13:39 /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:28:58.029289  221374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1014 14:28:58.037015  221374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1014 14:28:58.046845  221374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7542.pem && ln -fs /usr/share/ca-certificates/7542.pem /etc/ssl/certs/7542.pem"
	I1014 14:28:58.056876  221374 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7542.pem
	I1014 14:28:58.060750  221374 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 14 13:48 /usr/share/ca-certificates/7542.pem
	I1014 14:28:58.060829  221374 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7542.pem
	I1014 14:28:58.068326  221374 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7542.pem /etc/ssl/certs/51391683.0"
	I1014 14:28:58.078538  221374 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1014 14:28:58.082367  221374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1014 14:28:58.089895  221374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1014 14:28:58.097543  221374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1014 14:28:58.104565  221374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1014 14:28:58.112213  221374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1014 14:28:58.119085  221374 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
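Each `openssl x509 ... -checkend 86400` call above asks whether the certificate expires within the next 86400 seconds (24 hours); since all of them pass, none of the control-plane certs need regenerating. The same check in Go, as a sketch.

	package provision

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// certExpiresWithin reports whether the PEM certificate at path expires within d,
	// mirroring `openssl x509 -checkend`. Sketch only.
	func certExpiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}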
	I1014 14:28:58.125953  221374 kubeadm.go:392] StartCluster: {Name:no-preload-683238 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-683238 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins
:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 14:28:58.126067  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1014 14:28:58.126148  221374 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1014 14:28:58.167194  221374 cri.go:89] found id: "c8fd96d6b7cb5ae020fff89753a025d489be5671122cce90772b022fd3eccc13"
	I1014 14:28:58.167219  221374 cri.go:89] found id: "6f6ef466d566710cd5dd06872e0479083fbd53be9fb10b40bcc90af7769f6a6e"
	I1014 14:28:58.167224  221374 cri.go:89] found id: "74eaf8d582efa3dfb728e8d43016ec52ef67ae2f71db70cd197a32da1e4aaabe"
	I1014 14:28:58.167235  221374 cri.go:89] found id: "4405c0a43acd254020ed58740f4882cb9be99bfe6344a875152e769c3bea55ea"
	I1014 14:28:58.167239  221374 cri.go:89] found id: "d87c0b5c3e5f6d53e215c9b94b3f3cca514216a568ea540dff26983f4bfd328a"
	I1014 14:28:58.167243  221374 cri.go:89] found id: "f7f6f14df48eb2e8a0f74267278fc99b340b9cd7a6d1d05c977f6b10aaeb3394"
	I1014 14:28:58.167246  221374 cri.go:89] found id: "b76ea8b9f5d7575fc92a85547c93624a253ab47d7282ef0a4fbaec7476a7b036"
	I1014 14:28:58.167249  221374 cri.go:89] found id: "a59966089808d368966d041b915834de840109495ea09c53ba476d62c27311a1"
	I1014 14:28:58.167252  221374 cri.go:89] found id: ""
	I1014 14:28:58.167303  221374 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1014 14:28:58.179912  221374 cri.go:116] JSON = null
	W1014 14:28:58.180008  221374 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I1014 14:28:58.180095  221374 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1014 14:28:58.194804  221374 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1014 14:28:58.194886  221374 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1014 14:28:58.194963  221374 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1014 14:28:58.207494  221374 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1014 14:28:58.208186  221374 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-683238" does not appear in /home/jenkins/minikube-integration/19790-2229/kubeconfig
	I1014 14:28:58.208491  221374 kubeconfig.go:62] /home/jenkins/minikube-integration/19790-2229/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-683238" cluster setting kubeconfig missing "no-preload-683238" context setting]
	I1014 14:28:58.209499  221374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2229/kubeconfig: {Name:mk7703bee112acb0d700fbfe8aa7245ea0dd07d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:28:58.210903  221374 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1014 14:28:58.222841  221374 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I1014 14:28:58.222876  221374 kubeadm.go:597] duration metric: took 27.968548ms to restartPrimaryControlPlane
	I1014 14:28:58.222885  221374 kubeadm.go:394] duration metric: took 96.942366ms to StartCluster
	I1014 14:28:58.222902  221374 settings.go:142] acquiring lock: {Name:mk7dda8238a0606dcfbe3db5d257a14d7d308979 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1014 14:28:58.222958  221374 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19790-2229/kubeconfig
	I1014 14:28:58.223918  221374 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19790-2229/kubeconfig: {Name:mk7703bee112acb0d700fbfe8aa7245ea0dd07d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
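Because "no-preload-683238" was missing from the kubeconfig, both its cluster and context entries are repaired and the file is rewritten under a lock. A sketch of that kind of repair with k8s.io/client-go/tools/clientcmd; the server and CA paths are plain parameters here, and the real code also sets credentials and the current context.

	package provision

	import (
		"k8s.io/client-go/tools/clientcmd"
		clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	)

	// repairKubeconfig adds (or overwrites) the cluster and context entries for a
	// profile, like the "kubeconfig needs updating (will repair)" step. Sketch only.
	func repairKubeconfig(path, name, server, caFile string) error {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return err
		}
		cluster := clientcmdapi.NewCluster()
		cluster.Server = server // e.g. https://192.168.76.2:8443
		cluster.CertificateAuthority = caFile
		cfg.Clusters[name] = cluster

		ctx := clientcmdapi.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx

		return clientcmd.WriteToFile(*cfg, path)
	}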
	I1014 14:28:58.224110  221374 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1014 14:28:58.224411  221374 config.go:182] Loaded profile config "no-preload-683238": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1014 14:28:58.224457  221374 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1014 14:28:58.224582  221374 addons.go:69] Setting storage-provisioner=true in profile "no-preload-683238"
	I1014 14:28:58.224602  221374 addons.go:234] Setting addon storage-provisioner=true in "no-preload-683238"
	W1014 14:28:58.224610  221374 addons.go:243] addon storage-provisioner should already be in state true
	I1014 14:28:58.224632  221374 host.go:66] Checking if "no-preload-683238" exists ...
	I1014 14:28:58.224634  221374 addons.go:69] Setting default-storageclass=true in profile "no-preload-683238"
	I1014 14:28:58.224680  221374 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-683238"
	I1014 14:28:58.225113  221374 cli_runner.go:164] Run: docker container inspect no-preload-683238 --format={{.State.Status}}
	I1014 14:28:58.225408  221374 cli_runner.go:164] Run: docker container inspect no-preload-683238 --format={{.State.Status}}
	I1014 14:28:58.225856  221374 addons.go:69] Setting dashboard=true in profile "no-preload-683238"
	I1014 14:28:58.225879  221374 addons.go:234] Setting addon dashboard=true in "no-preload-683238"
	W1014 14:28:58.225886  221374 addons.go:243] addon dashboard should already be in state true
	I1014 14:28:58.225909  221374 host.go:66] Checking if "no-preload-683238" exists ...
	I1014 14:28:58.226312  221374 cli_runner.go:164] Run: docker container inspect no-preload-683238 --format={{.State.Status}}
	I1014 14:28:58.236593  221374 out.go:177] * Verifying Kubernetes components...
	I1014 14:28:58.236769  221374 addons.go:69] Setting metrics-server=true in profile "no-preload-683238"
	I1014 14:28:58.236810  221374 addons.go:234] Setting addon metrics-server=true in "no-preload-683238"
	W1014 14:28:58.236833  221374 addons.go:243] addon metrics-server should already be in state true
	I1014 14:28:58.236880  221374 host.go:66] Checking if "no-preload-683238" exists ...
	I1014 14:28:58.237697  221374 cli_runner.go:164] Run: docker container inspect no-preload-683238 --format={{.State.Status}}
	I1014 14:28:58.239029  221374 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1014 14:28:58.275448  221374 addons.go:234] Setting addon default-storageclass=true in "no-preload-683238"
	W1014 14:28:58.275468  221374 addons.go:243] addon default-storageclass should already be in state true
	I1014 14:28:58.275494  221374 host.go:66] Checking if "no-preload-683238" exists ...
	I1014 14:28:58.275891  221374 cli_runner.go:164] Run: docker container inspect no-preload-683238 --format={{.State.Status}}
	I1014 14:28:58.305109  221374 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1014 14:28:58.305235  221374 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1014 14:28:58.306773  221374 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I1014 14:28:58.307077  221374 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 14:28:58.307103  221374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1014 14:28:58.307164  221374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-683238
	I1014 14:28:58.308725  221374 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1014 14:28:58.308751  221374 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1014 14:28:58.308804  221374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-683238
	I1014 14:28:58.322493  221374 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1014 14:28:58.324203  221374 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1014 14:28:58.324227  221374 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1014 14:28:58.324345  221374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-683238
	I1014 14:28:58.364329  221374 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1014 14:28:58.364349  221374 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1014 14:28:58.364414  221374 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-683238
	I1014 14:28:58.365038  221374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/no-preload-683238/id_rsa Username:docker}
	I1014 14:28:58.397772  221374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/no-preload-683238/id_rsa Username:docker}
	I1014 14:28:58.406573  221374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/no-preload-683238/id_rsa Username:docker}
	I1014 14:28:58.417742  221374 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/no-preload-683238/id_rsa Username:docker}
	I1014 14:28:58.440536  221374 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1014 14:28:58.473269  221374 node_ready.go:35] waiting up to 6m0s for node "no-preload-683238" to be "Ready" ...
	I1014 14:28:58.622876  221374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 14:28:58.646549  221374 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1014 14:28:58.646575  221374 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1014 14:28:58.679692  221374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1014 14:28:58.715004  221374 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1014 14:28:58.715032  221374 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1014 14:28:58.738243  221374 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1014 14:28:58.738268  221374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1014 14:28:58.785823  221374 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1014 14:28:58.785850  221374 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1014 14:28:58.898508  221374 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1014 14:28:58.898536  221374 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1014 14:28:58.998440  221374 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 14:28:58.998487  221374 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W1014 14:28:59.008153  221374 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 14:28:59.008281  221374 retry.go:31] will retry after 251.745999ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 14:28:59.016325  221374 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1014 14:28:59.016351  221374 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W1014 14:28:59.102975  221374 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1014 14:28:59.103008  221374 retry.go:31] will retry after 255.901918ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
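Both kubectl apply calls above race the API server, which is still coming back up on localhost:8443, so they fail with connection refused and are retried after short randomized delays (about 252ms and 256ms here) before the forced re-applies below succeed. A minimal retry-with-backoff sketch in the same spirit; minikube's retry.go differs in detail.

	package provision

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn up to attempts times, sleeping a jittered,
	// growing delay between tries. Illustrative only.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}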
	I1014 14:28:59.151577  221374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1014 14:28:59.202679  221374 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1014 14:28:59.202749  221374 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1014 14:28:59.260511  221374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1014 14:28:59.359544  221374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1014 14:28:59.380835  221374 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1014 14:28:59.380909  221374 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1014 14:28:59.515734  221374 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1014 14:28:59.515807  221374 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1014 14:28:59.667658  221374 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1014 14:28:59.667741  221374 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1014 14:28:59.776136  221374 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1014 14:28:59.776206  221374 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1014 14:28:59.826671  221374 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1014 14:28:58.389723  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:00.862532  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:03.668936  221374 node_ready.go:49] node "no-preload-683238" has status "Ready":"True"
	I1014 14:29:03.668965  221374 node_ready.go:38] duration metric: took 5.195658196s for node "no-preload-683238" to be "Ready" ...
	I1014 14:29:03.668976  221374 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 14:29:03.731386  221374 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-f6xml" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:03.949461  221374 pod_ready.go:93] pod "coredns-7c65d6cfc9-f6xml" in "kube-system" namespace has status "Ready":"True"
	I1014 14:29:03.949487  221374 pod_ready.go:82] duration metric: took 218.063081ms for pod "coredns-7c65d6cfc9-f6xml" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:03.949498  221374 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-683238" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:04.052988  221374 pod_ready.go:93] pod "etcd-no-preload-683238" in "kube-system" namespace has status "Ready":"True"
	I1014 14:29:04.053019  221374 pod_ready.go:82] duration metric: took 103.513282ms for pod "etcd-no-preload-683238" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:04.053035  221374 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-683238" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:04.075290  221374 pod_ready.go:93] pod "kube-apiserver-no-preload-683238" in "kube-system" namespace has status "Ready":"True"
	I1014 14:29:04.075316  221374 pod_ready.go:82] duration metric: took 22.272795ms for pod "kube-apiserver-no-preload-683238" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:04.075329  221374 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-683238" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:04.099962  221374 pod_ready.go:93] pod "kube-controller-manager-no-preload-683238" in "kube-system" namespace has status "Ready":"True"
	I1014 14:29:04.099995  221374 pod_ready.go:82] duration metric: took 24.658167ms for pod "kube-controller-manager-no-preload-683238" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:04.100007  221374 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-lkxpz" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:04.126821  221374 pod_ready.go:93] pod "kube-proxy-lkxpz" in "kube-system" namespace has status "Ready":"True"
	I1014 14:29:04.126860  221374 pod_ready.go:82] duration metric: took 26.845696ms for pod "kube-proxy-lkxpz" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:04.126872  221374 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-683238" in "kube-system" namespace to be "Ready" ...
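Each pod_ready entry polls a system pod until its Ready condition reports True, with a 6m0s cap per pod. A sketch of the same wait using client-go, assuming a recent client-go/apimachinery that provides wait.PollUntilContextTimeout and an already-constructed *kubernetes.Clientset; the real helper also handles label selectors and whole groups of pods.

	package provision

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls until the named pod reports Ready=True, like pod_ready.go.
	// Sketch only.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // keep polling on transient API errors
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}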
	I1014 14:29:03.362440  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:05.362777  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:06.145596  221374 pod_ready.go:103] pod "kube-scheduler-no-preload-683238" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:07.701339  221374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.54970079s)
	I1014 14:29:07.701558  221374 addons.go:475] Verifying addon metrics-server=true in "no-preload-683238"
	I1014 14:29:07.701454  221374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.440867831s)
	I1014 14:29:07.701492  221374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.341877494s)
	I1014 14:29:07.897811  221374 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.071021624s)
	I1014 14:29:07.900116  221374 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-683238 addons enable metrics-server
	
	I1014 14:29:07.902589  221374 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass, dashboard
	I1014 14:29:07.904404  221374 addons.go:510] duration metric: took 9.679941364s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass dashboard]
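	A minimal way to spot-check this addon rollout outside the test harness, assuming the kube context that minikube creates for the profile and the default k8s-app=metrics-server label used by the bundled metrics-server manifest (both assumptions, not taken from this log):
	
		minikube -p no-preload-683238 addons list
		kubectl --context no-preload-683238 -n kube-system get pods -l k8s-app=metrics-server
	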
	I1014 14:29:08.634664  221374 pod_ready.go:103] pod "kube-scheduler-no-preload-683238" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:07.364010  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:09.368100  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:11.136730  221374 pod_ready.go:103] pod "kube-scheduler-no-preload-683238" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:13.632708  221374 pod_ready.go:103] pod "kube-scheduler-no-preload-683238" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:11.862636  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:13.864720  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:16.362492  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:15.633572  221374 pod_ready.go:103] pod "kube-scheduler-no-preload-683238" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:17.632997  221374 pod_ready.go:93] pod "kube-scheduler-no-preload-683238" in "kube-system" namespace has status "Ready":"True"
	I1014 14:29:17.633022  221374 pod_ready.go:82] duration metric: took 13.506142087s for pod "kube-scheduler-no-preload-683238" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:17.633043  221374 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:19.638611  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:18.862570  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:21.363558  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:21.640040  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:24.139869  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:23.862380  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:26.362144  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:26.140719  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:28.638326  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:28.862409  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:31.362871  216259 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:30.640899  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:33.140078  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:32.362216  216259 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"True"
	I1014 14:29:32.362286  216259 pod_ready.go:82] duration metric: took 1m27.00682947s for pod "kube-controller-manager-old-k8s-version-805757" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:32.362302  216259 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-nj7wx" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:32.367368  216259 pod_ready.go:93] pod "kube-proxy-nj7wx" in "kube-system" namespace has status "Ready":"True"
	I1014 14:29:32.367392  216259 pod_ready.go:82] duration metric: took 5.08207ms for pod "kube-proxy-nj7wx" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:32.367405  216259 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-805757" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:32.372466  216259 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-805757" in "kube-system" namespace has status "Ready":"True"
	I1014 14:29:32.372491  216259 pod_ready.go:82] duration metric: took 5.077926ms for pod "kube-scheduler-old-k8s-version-805757" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:32.372504  216259 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace to be "Ready" ...
	I1014 14:29:34.379365  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:35.640293  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:38.138941  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:40.139982  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:36.879386  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:38.884745  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:41.378829  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:42.142038  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:44.640334  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:43.379493  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:45.419448  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:47.140105  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:49.639953  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:47.879361  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:49.879585  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:52.138849  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:54.139315  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:52.378918  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:54.379170  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:56.139376  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:58.141502  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:00.193949  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:56.879120  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:29:58.879718  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:00.881010  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:02.639938  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:05.140011  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:03.379169  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:05.380331  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:07.140308  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:09.639811  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:07.879010  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:09.879270  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:12.139076  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:14.140160  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:11.885034  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:14.379451  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:16.140206  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:18.639836  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:16.879390  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:19.379693  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:21.382557  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:21.139795  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:23.140387  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:23.878999  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:26.379088  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:25.642937  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:28.138493  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:30.141286  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:28.878749  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:31.378849  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:32.639772  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:35.140389  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:33.378999  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:35.379374  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:37.638985  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:39.639436  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:37.379992  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:39.879036  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:41.639850  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:44.139293  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:41.879092  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:44.378523  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:46.379123  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:46.139355  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:48.639154  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:48.879247  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:50.880513  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:50.639518  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:53.139823  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:55.141385  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:53.378959  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:55.379151  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:57.640246  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:00.164498  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:57.878997  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:30:59.879145  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:02.639975  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:05.145626  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:01.882814  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:04.379368  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:06.386347  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:07.639243  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:10.141388  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:08.878519  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:10.878795  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:12.638366  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:14.638900  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:12.879028  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:14.879477  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:16.639268  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:19.139599  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:17.378629  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:19.379260  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:21.639832  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:24.139937  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:21.879187  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:23.886598  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:26.379060  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:26.639109  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:29.139322  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:28.879813  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:30.890781  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:31.641391  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:34.140020  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:33.380293  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:35.880961  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:36.638949  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:38.639059  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:38.379273  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:40.379418  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:40.639158  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:43.138724  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:45.159879  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:42.379580  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:44.879477  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:47.639267  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:49.639700  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:47.378646  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:49.379027  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:52.139063  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:54.139493  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:51.879180  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:53.879232  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:56.379016  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:56.139568  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:58.638908  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:31:58.379324  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:00.421269  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:00.640524  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:03.140635  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:02.879330  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:05.379215  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:05.639476  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:08.140070  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:07.878919  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:09.879783  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:10.639411  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:12.639628  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:15.139638  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:11.887894  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:14.379482  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:17.639176  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:19.639736  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:16.879007  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:18.879819  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:21.378803  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:22.139976  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:24.638927  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:23.379684  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:25.878727  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:26.639400  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:29.139763  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:27.888043  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:30.379550  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:31.141234  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:33.639121  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:32.879063  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:35.378983  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:36.139772  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:38.638865  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:37.879331  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:40.378975  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:40.639522  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:42.639831  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:45.146715  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:42.379655  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:44.879380  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:47.639834  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:50.139928  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:46.881767  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:49.379058  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:52.638844  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:54.639040  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:51.879190  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:54.378656  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:57.139576  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:59.639144  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:56.879139  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:32:58.879348  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:01.378669  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:02.139167  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:04.139294  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:03.379085  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:05.879350  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:06.140070  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:08.638845  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:07.879403  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:09.879447  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:10.638878  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:12.639045  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:14.639377  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:12.379073  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:14.381700  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:17.139160  221374 pod_ready.go:103] pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:17.639663  221374 pod_ready.go:82] duration metric: took 4m0.006604371s for pod "metrics-server-6867b74b74-95gsh" in "kube-system" namespace to be "Ready" ...
	E1014 14:33:17.639688  221374 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1014 14:33:17.639698  221374 pod_ready.go:39] duration metric: took 4m13.970711622s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 14:33:17.639712  221374 api_server.go:52] waiting for apiserver process to appear ...
	I1014 14:33:17.639743  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1014 14:33:17.639804  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 14:33:17.678892  221374 cri.go:89] found id: "e58f4fa3217e55f7d33e39793bcb94041d7c4e2aa8210f442e54126a0c2503ad"
	I1014 14:33:17.678916  221374 cri.go:89] found id: "f7f6f14df48eb2e8a0f74267278fc99b340b9cd7a6d1d05c977f6b10aaeb3394"
	I1014 14:33:17.678922  221374 cri.go:89] found id: ""
	I1014 14:33:17.678929  221374 logs.go:282] 2 containers: [e58f4fa3217e55f7d33e39793bcb94041d7c4e2aa8210f442e54126a0c2503ad f7f6f14df48eb2e8a0f74267278fc99b340b9cd7a6d1d05c977f6b10aaeb3394]
	I1014 14:33:17.678987  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:17.682690  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:17.686169  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1014 14:33:17.686242  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 14:33:17.731027  221374 cri.go:89] found id: "7a23297f8b742bd05d7a4f4f8c1666635c3e11f09c39bcbc93edc350f54b7f42"
	I1014 14:33:17.731051  221374 cri.go:89] found id: "a59966089808d368966d041b915834de840109495ea09c53ba476d62c27311a1"
	I1014 14:33:17.731056  221374 cri.go:89] found id: ""
	I1014 14:33:17.731064  221374 logs.go:282] 2 containers: [7a23297f8b742bd05d7a4f4f8c1666635c3e11f09c39bcbc93edc350f54b7f42 a59966089808d368966d041b915834de840109495ea09c53ba476d62c27311a1]
	I1014 14:33:17.731127  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:17.735588  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:17.740370  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1014 14:33:17.740497  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 14:33:17.783532  221374 cri.go:89] found id: "39a1af2c5fd180af9c32f1b16798b6b5ea3299873a4b1fc9ec61355612de7600"
	I1014 14:33:17.783596  221374 cri.go:89] found id: "6f6ef466d566710cd5dd06872e0479083fbd53be9fb10b40bcc90af7769f6a6e"
	I1014 14:33:17.783617  221374 cri.go:89] found id: ""
	I1014 14:33:17.783642  221374 logs.go:282] 2 containers: [39a1af2c5fd180af9c32f1b16798b6b5ea3299873a4b1fc9ec61355612de7600 6f6ef466d566710cd5dd06872e0479083fbd53be9fb10b40bcc90af7769f6a6e]
	I1014 14:33:17.783705  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:17.787416  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:17.790898  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1014 14:33:17.790977  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 14:33:17.826911  221374 cri.go:89] found id: "2eabdafd5b9fe67ded84ac3e07c7aac1549c0d816cc4df8f42705c201500ca45"
	I1014 14:33:17.826935  221374 cri.go:89] found id: "b76ea8b9f5d7575fc92a85547c93624a253ab47d7282ef0a4fbaec7476a7b036"
	I1014 14:33:17.826941  221374 cri.go:89] found id: ""
	I1014 14:33:17.826949  221374 logs.go:282] 2 containers: [2eabdafd5b9fe67ded84ac3e07c7aac1549c0d816cc4df8f42705c201500ca45 b76ea8b9f5d7575fc92a85547c93624a253ab47d7282ef0a4fbaec7476a7b036]
	I1014 14:33:17.827006  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:17.830895  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:17.834537  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1014 14:33:17.834607  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 14:33:17.883440  221374 cri.go:89] found id: "3e3fb59f2932707f2da43d380b0bbf913a8fbb121f8fdbc0f4b71ce7ef81db42"
	I1014 14:33:17.883461  221374 cri.go:89] found id: "4405c0a43acd254020ed58740f4882cb9be99bfe6344a875152e769c3bea55ea"
	I1014 14:33:17.883467  221374 cri.go:89] found id: ""
	I1014 14:33:17.883474  221374 logs.go:282] 2 containers: [3e3fb59f2932707f2da43d380b0bbf913a8fbb121f8fdbc0f4b71ce7ef81db42 4405c0a43acd254020ed58740f4882cb9be99bfe6344a875152e769c3bea55ea]
	I1014 14:33:17.883534  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:17.887300  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:17.892946  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 14:33:17.893017  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 14:33:17.931887  221374 cri.go:89] found id: "2ae76cd5e0becc958081916f3ddfeac6a70fe9b6be55a2cbfb88550acd880c78"
	I1014 14:33:17.931910  221374 cri.go:89] found id: "d87c0b5c3e5f6d53e215c9b94b3f3cca514216a568ea540dff26983f4bfd328a"
	I1014 14:33:17.931918  221374 cri.go:89] found id: ""
	I1014 14:33:17.931926  221374 logs.go:282] 2 containers: [2ae76cd5e0becc958081916f3ddfeac6a70fe9b6be55a2cbfb88550acd880c78 d87c0b5c3e5f6d53e215c9b94b3f3cca514216a568ea540dff26983f4bfd328a]
	I1014 14:33:17.932002  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:17.935839  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:17.939298  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1014 14:33:17.939428  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 14:33:17.977116  221374 cri.go:89] found id: "f5a7360c53e6dcc6eb70512bbd44fc5f93af1812ff0009b6fb2ecaea9350f00b"
	I1014 14:33:17.977139  221374 cri.go:89] found id: "74eaf8d582efa3dfb728e8d43016ec52ef67ae2f71db70cd197a32da1e4aaabe"
	I1014 14:33:17.977145  221374 cri.go:89] found id: ""
	I1014 14:33:17.977153  221374 logs.go:282] 2 containers: [f5a7360c53e6dcc6eb70512bbd44fc5f93af1812ff0009b6fb2ecaea9350f00b 74eaf8d582efa3dfb728e8d43016ec52ef67ae2f71db70cd197a32da1e4aaabe]
	I1014 14:33:17.977208  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:17.981043  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:17.984513  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 14:33:17.984630  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 14:33:18.041836  221374 cri.go:89] found id: "20520ee100e1b0cbf2059726e645b35dc1dbf4f5a6f1df5c05f5b7530b8c49b1"
	I1014 14:33:18.041913  221374 cri.go:89] found id: ""
	I1014 14:33:18.041936  221374 logs.go:282] 1 containers: [20520ee100e1b0cbf2059726e645b35dc1dbf4f5a6f1df5c05f5b7530b8c49b1]
	I1014 14:33:18.042016  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:18.046483  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1014 14:33:18.046593  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 14:33:18.090019  221374 cri.go:89] found id: "56956e7eaf546abfe0db6b267bbf3153f86a61390f167b0f1e426a0844b80de2"
	I1014 14:33:18.090043  221374 cri.go:89] found id: "8d49746b9b00eef68fa8aea7e15cc8b9d8ae1f83529dcdb17b74d8b64e76d8de"
	I1014 14:33:18.090049  221374 cri.go:89] found id: ""
	I1014 14:33:18.090056  221374 logs.go:282] 2 containers: [56956e7eaf546abfe0db6b267bbf3153f86a61390f167b0f1e426a0844b80de2 8d49746b9b00eef68fa8aea7e15cc8b9d8ae1f83529dcdb17b74d8b64e76d8de]
	I1014 14:33:18.090142  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:18.093719  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:18.097249  221374 logs.go:123] Gathering logs for kubelet ...
	I1014 14:33:18.097274  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1014 14:33:18.143515  221374 logs.go:138] Found kubelet problem: Oct 14 14:29:07 no-preload-683238 kubelet[658]: W1014 14:29:07.633342     658 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-683238" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-683238' and this object
	W1014 14:33:18.143770  221374 logs.go:138] Found kubelet problem: Oct 14 14:29:07 no-preload-683238 kubelet[658]: E1014 14:29:07.633448     658 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-683238\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-683238' and this object" logger="UnhandledError"
	I1014 14:33:18.175761  221374 logs.go:123] Gathering logs for coredns [39a1af2c5fd180af9c32f1b16798b6b5ea3299873a4b1fc9ec61355612de7600] ...
	I1014 14:33:18.175798  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39a1af2c5fd180af9c32f1b16798b6b5ea3299873a4b1fc9ec61355612de7600"
	I1014 14:33:18.223072  221374 logs.go:123] Gathering logs for kube-proxy [4405c0a43acd254020ed58740f4882cb9be99bfe6344a875152e769c3bea55ea] ...
	I1014 14:33:18.223105  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4405c0a43acd254020ed58740f4882cb9be99bfe6344a875152e769c3bea55ea"
	I1014 14:33:18.262926  221374 logs.go:123] Gathering logs for kube-controller-manager [2ae76cd5e0becc958081916f3ddfeac6a70fe9b6be55a2cbfb88550acd880c78] ...
	I1014 14:33:18.262953  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ae76cd5e0becc958081916f3ddfeac6a70fe9b6be55a2cbfb88550acd880c78"
	I1014 14:33:18.338307  221374 logs.go:123] Gathering logs for containerd ...
	I1014 14:33:18.338382  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1014 14:33:18.414613  221374 logs.go:123] Gathering logs for dmesg ...
	I1014 14:33:18.414695  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 14:33:18.435665  221374 logs.go:123] Gathering logs for describe nodes ...
	I1014 14:33:18.435763  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 14:33:18.601219  221374 logs.go:123] Gathering logs for etcd [7a23297f8b742bd05d7a4f4f8c1666635c3e11f09c39bcbc93edc350f54b7f42] ...
	I1014 14:33:18.601252  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a23297f8b742bd05d7a4f4f8c1666635c3e11f09c39bcbc93edc350f54b7f42"
	I1014 14:33:18.655699  221374 logs.go:123] Gathering logs for coredns [6f6ef466d566710cd5dd06872e0479083fbd53be9fb10b40bcc90af7769f6a6e] ...
	I1014 14:33:18.655729  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f6ef466d566710cd5dd06872e0479083fbd53be9fb10b40bcc90af7769f6a6e"
	I1014 14:33:18.711450  221374 logs.go:123] Gathering logs for kube-scheduler [2eabdafd5b9fe67ded84ac3e07c7aac1549c0d816cc4df8f42705c201500ca45] ...
	I1014 14:33:18.711479  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2eabdafd5b9fe67ded84ac3e07c7aac1549c0d816cc4df8f42705c201500ca45"
	I1014 14:33:18.754151  221374 logs.go:123] Gathering logs for storage-provisioner [56956e7eaf546abfe0db6b267bbf3153f86a61390f167b0f1e426a0844b80de2] ...
	I1014 14:33:18.754191  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56956e7eaf546abfe0db6b267bbf3153f86a61390f167b0f1e426a0844b80de2"
	I1014 14:33:18.794894  221374 logs.go:123] Gathering logs for storage-provisioner [8d49746b9b00eef68fa8aea7e15cc8b9d8ae1f83529dcdb17b74d8b64e76d8de] ...
	I1014 14:33:18.794933  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d49746b9b00eef68fa8aea7e15cc8b9d8ae1f83529dcdb17b74d8b64e76d8de"
	I1014 14:33:18.843628  221374 logs.go:123] Gathering logs for kube-scheduler [b76ea8b9f5d7575fc92a85547c93624a253ab47d7282ef0a4fbaec7476a7b036] ...
	I1014 14:33:18.843655  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b76ea8b9f5d7575fc92a85547c93624a253ab47d7282ef0a4fbaec7476a7b036"
	I1014 14:33:18.905337  221374 logs.go:123] Gathering logs for kube-proxy [3e3fb59f2932707f2da43d380b0bbf913a8fbb121f8fdbc0f4b71ce7ef81db42] ...
	I1014 14:33:18.905392  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3fb59f2932707f2da43d380b0bbf913a8fbb121f8fdbc0f4b71ce7ef81db42"
	I1014 14:33:18.948249  221374 logs.go:123] Gathering logs for kindnet [74eaf8d582efa3dfb728e8d43016ec52ef67ae2f71db70cd197a32da1e4aaabe] ...
	I1014 14:33:18.948277  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74eaf8d582efa3dfb728e8d43016ec52ef67ae2f71db70cd197a32da1e4aaabe"
	I1014 14:33:18.993466  221374 logs.go:123] Gathering logs for kubernetes-dashboard [20520ee100e1b0cbf2059726e645b35dc1dbf4f5a6f1df5c05f5b7530b8c49b1] ...
	I1014 14:33:18.993494  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20520ee100e1b0cbf2059726e645b35dc1dbf4f5a6f1df5c05f5b7530b8c49b1"
	I1014 14:33:19.045116  221374 logs.go:123] Gathering logs for kube-apiserver [e58f4fa3217e55f7d33e39793bcb94041d7c4e2aa8210f442e54126a0c2503ad] ...
	I1014 14:33:19.045145  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e58f4fa3217e55f7d33e39793bcb94041d7c4e2aa8210f442e54126a0c2503ad"
	I1014 14:33:19.099969  221374 logs.go:123] Gathering logs for kube-apiserver [f7f6f14df48eb2e8a0f74267278fc99b340b9cd7a6d1d05c977f6b10aaeb3394] ...
	I1014 14:33:19.100001  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7f6f14df48eb2e8a0f74267278fc99b340b9cd7a6d1d05c977f6b10aaeb3394"
	I1014 14:33:19.150028  221374 logs.go:123] Gathering logs for etcd [a59966089808d368966d041b915834de840109495ea09c53ba476d62c27311a1] ...
	I1014 14:33:19.150061  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59966089808d368966d041b915834de840109495ea09c53ba476d62c27311a1"
	I1014 14:33:19.199503  221374 logs.go:123] Gathering logs for kube-controller-manager [d87c0b5c3e5f6d53e215c9b94b3f3cca514216a568ea540dff26983f4bfd328a] ...
	I1014 14:33:19.199535  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d87c0b5c3e5f6d53e215c9b94b3f3cca514216a568ea540dff26983f4bfd328a"
	I1014 14:33:19.265665  221374 logs.go:123] Gathering logs for kindnet [f5a7360c53e6dcc6eb70512bbd44fc5f93af1812ff0009b6fb2ecaea9350f00b] ...
	I1014 14:33:19.265701  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5a7360c53e6dcc6eb70512bbd44fc5f93af1812ff0009b6fb2ecaea9350f00b"
	I1014 14:33:19.313981  221374 logs.go:123] Gathering logs for container status ...
	I1014 14:33:19.314013  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 14:33:19.359990  221374 out.go:358] Setting ErrFile to fd 2...
	I1014 14:33:19.360021  221374 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1014 14:33:19.360100  221374 out.go:270] X Problems detected in kubelet:
	W1014 14:33:19.360112  221374 out.go:270]   Oct 14 14:29:07 no-preload-683238 kubelet[658]: W1014 14:29:07.633342     658 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-683238" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-683238' and this object
	W1014 14:33:19.360120  221374 out.go:270]   Oct 14 14:29:07 no-preload-683238 kubelet[658]: E1014 14:29:07.633448     658 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-683238\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-683238' and this object" logger="UnhandledError"
	I1014 14:33:19.360149  221374 out.go:358] Setting ErrFile to fd 2...
	I1014 14:33:19.360165  221374 out.go:392] TERM=,COLORTERM=, which probably does not support color
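	The log collection above reduces to a few node-side commands; a condensed sketch using the same crictl and journalctl invocations the runner issues (the container ID is a placeholder, not a value from this run):
	
		sudo crictl ps -a --quiet --name=kube-apiserver   # resolve container IDs for a component by name
		sudo crictl logs --tail 400 <container-id>        # dump the last 400 log lines of one container
		sudo journalctl -u kubelet -n 400                 # kubelet unit logs from the node
	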
	I1014 14:33:16.879392  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:18.880717  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:21.380013  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:23.879438  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:26.378684  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:29.361808  221374 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 14:33:29.374762  221374 api_server.go:72] duration metric: took 4m31.150616097s to wait for apiserver process to appear ...
	I1014 14:33:29.374792  221374 api_server.go:88] waiting for apiserver healthz status ...
	I1014 14:33:29.374828  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1014 14:33:29.374884  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 14:33:29.419983  221374 cri.go:89] found id: "e58f4fa3217e55f7d33e39793bcb94041d7c4e2aa8210f442e54126a0c2503ad"
	I1014 14:33:29.420004  221374 cri.go:89] found id: "f7f6f14df48eb2e8a0f74267278fc99b340b9cd7a6d1d05c977f6b10aaeb3394"
	I1014 14:33:29.420009  221374 cri.go:89] found id: ""
	I1014 14:33:29.420017  221374 logs.go:282] 2 containers: [e58f4fa3217e55f7d33e39793bcb94041d7c4e2aa8210f442e54126a0c2503ad f7f6f14df48eb2e8a0f74267278fc99b340b9cd7a6d1d05c977f6b10aaeb3394]
	I1014 14:33:29.420073  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:29.423841  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:29.427501  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1014 14:33:29.427573  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 14:33:29.477150  221374 cri.go:89] found id: "7a23297f8b742bd05d7a4f4f8c1666635c3e11f09c39bcbc93edc350f54b7f42"
	I1014 14:33:29.477176  221374 cri.go:89] found id: "a59966089808d368966d041b915834de840109495ea09c53ba476d62c27311a1"
	I1014 14:33:29.477181  221374 cri.go:89] found id: ""
	I1014 14:33:29.477189  221374 logs.go:282] 2 containers: [7a23297f8b742bd05d7a4f4f8c1666635c3e11f09c39bcbc93edc350f54b7f42 a59966089808d368966d041b915834de840109495ea09c53ba476d62c27311a1]
	I1014 14:33:29.477255  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:29.481870  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:29.485777  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1014 14:33:29.485857  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 14:33:29.535435  221374 cri.go:89] found id: "39a1af2c5fd180af9c32f1b16798b6b5ea3299873a4b1fc9ec61355612de7600"
	I1014 14:33:29.535471  221374 cri.go:89] found id: "6f6ef466d566710cd5dd06872e0479083fbd53be9fb10b40bcc90af7769f6a6e"
	I1014 14:33:29.535477  221374 cri.go:89] found id: ""
	I1014 14:33:29.535484  221374 logs.go:282] 2 containers: [39a1af2c5fd180af9c32f1b16798b6b5ea3299873a4b1fc9ec61355612de7600 6f6ef466d566710cd5dd06872e0479083fbd53be9fb10b40bcc90af7769f6a6e]
	I1014 14:33:29.535548  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:29.539227  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:29.542750  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1014 14:33:29.542822  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 14:33:29.586289  221374 cri.go:89] found id: "2eabdafd5b9fe67ded84ac3e07c7aac1549c0d816cc4df8f42705c201500ca45"
	I1014 14:33:29.586365  221374 cri.go:89] found id: "b76ea8b9f5d7575fc92a85547c93624a253ab47d7282ef0a4fbaec7476a7b036"
	I1014 14:33:29.586385  221374 cri.go:89] found id: ""
	I1014 14:33:29.586410  221374 logs.go:282] 2 containers: [2eabdafd5b9fe67ded84ac3e07c7aac1549c0d816cc4df8f42705c201500ca45 b76ea8b9f5d7575fc92a85547c93624a253ab47d7282ef0a4fbaec7476a7b036]
	I1014 14:33:29.586521  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:29.590482  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:29.593883  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1014 14:33:29.593956  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 14:33:29.631109  221374 cri.go:89] found id: "3e3fb59f2932707f2da43d380b0bbf913a8fbb121f8fdbc0f4b71ce7ef81db42"
	I1014 14:33:29.631131  221374 cri.go:89] found id: "4405c0a43acd254020ed58740f4882cb9be99bfe6344a875152e769c3bea55ea"
	I1014 14:33:29.631137  221374 cri.go:89] found id: ""
	I1014 14:33:29.631144  221374 logs.go:282] 2 containers: [3e3fb59f2932707f2da43d380b0bbf913a8fbb121f8fdbc0f4b71ce7ef81db42 4405c0a43acd254020ed58740f4882cb9be99bfe6344a875152e769c3bea55ea]
	I1014 14:33:29.631217  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:29.635240  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:29.639083  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 14:33:29.639205  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 14:33:29.685804  221374 cri.go:89] found id: "2ae76cd5e0becc958081916f3ddfeac6a70fe9b6be55a2cbfb88550acd880c78"
	I1014 14:33:29.685880  221374 cri.go:89] found id: "d87c0b5c3e5f6d53e215c9b94b3f3cca514216a568ea540dff26983f4bfd328a"
	I1014 14:33:29.685903  221374 cri.go:89] found id: ""
	I1014 14:33:29.685926  221374 logs.go:282] 2 containers: [2ae76cd5e0becc958081916f3ddfeac6a70fe9b6be55a2cbfb88550acd880c78 d87c0b5c3e5f6d53e215c9b94b3f3cca514216a568ea540dff26983f4bfd328a]
	I1014 14:33:29.686004  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:29.690215  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:29.693744  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1014 14:33:29.693822  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 14:33:29.733613  221374 cri.go:89] found id: "f5a7360c53e6dcc6eb70512bbd44fc5f93af1812ff0009b6fb2ecaea9350f00b"
	I1014 14:33:29.733636  221374 cri.go:89] found id: "74eaf8d582efa3dfb728e8d43016ec52ef67ae2f71db70cd197a32da1e4aaabe"
	I1014 14:33:29.733642  221374 cri.go:89] found id: ""
	I1014 14:33:29.733649  221374 logs.go:282] 2 containers: [f5a7360c53e6dcc6eb70512bbd44fc5f93af1812ff0009b6fb2ecaea9350f00b 74eaf8d582efa3dfb728e8d43016ec52ef67ae2f71db70cd197a32da1e4aaabe]
	I1014 14:33:29.733712  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:29.738555  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:29.743407  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 14:33:29.743504  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 14:33:29.783636  221374 cri.go:89] found id: "20520ee100e1b0cbf2059726e645b35dc1dbf4f5a6f1df5c05f5b7530b8c49b1"
	I1014 14:33:29.783660  221374 cri.go:89] found id: ""
	I1014 14:33:29.783668  221374 logs.go:282] 1 containers: [20520ee100e1b0cbf2059726e645b35dc1dbf4f5a6f1df5c05f5b7530b8c49b1]
	I1014 14:33:29.783724  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:29.788847  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1014 14:33:29.788943  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 14:33:29.829919  221374 cri.go:89] found id: "56956e7eaf546abfe0db6b267bbf3153f86a61390f167b0f1e426a0844b80de2"
	I1014 14:33:29.829983  221374 cri.go:89] found id: "8d49746b9b00eef68fa8aea7e15cc8b9d8ae1f83529dcdb17b74d8b64e76d8de"
	I1014 14:33:29.829997  221374 cri.go:89] found id: ""
	I1014 14:33:29.830005  221374 logs.go:282] 2 containers: [56956e7eaf546abfe0db6b267bbf3153f86a61390f167b0f1e426a0844b80de2 8d49746b9b00eef68fa8aea7e15cc8b9d8ae1f83529dcdb17b74d8b64e76d8de]
	I1014 14:33:29.830075  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:29.833706  221374 ssh_runner.go:195] Run: which crictl
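
The lines above are the enumeration step that precedes log collection: for each control-plane component, minikube runs crictl over SSH with --all and --quiet to get bare container IDs (running and exited alike), and each ID found is tailed individually afterwards. A minimal by-hand sketch on the node, using the same two commands that appear in this log (the container ID is a placeholder):

    # list every storage-provisioner container, current and exited, IDs only
    sudo crictl ps -a --quiet --name=storage-provisioner
    # then tail the last 400 lines of one returned container
    sudo /usr/bin/crictl logs --tail 400 <container-id>
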
	I1014 14:33:29.837243  221374 logs.go:123] Gathering logs for storage-provisioner [8d49746b9b00eef68fa8aea7e15cc8b9d8ae1f83529dcdb17b74d8b64e76d8de] ...
	I1014 14:33:29.837271  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d49746b9b00eef68fa8aea7e15cc8b9d8ae1f83529dcdb17b74d8b64e76d8de"
	I1014 14:33:29.881543  221374 logs.go:123] Gathering logs for kube-apiserver [f7f6f14df48eb2e8a0f74267278fc99b340b9cd7a6d1d05c977f6b10aaeb3394] ...
	I1014 14:33:29.881579  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7f6f14df48eb2e8a0f74267278fc99b340b9cd7a6d1d05c977f6b10aaeb3394"
	I1014 14:33:29.941956  221374 logs.go:123] Gathering logs for etcd [a59966089808d368966d041b915834de840109495ea09c53ba476d62c27311a1] ...
	I1014 14:33:29.941991  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59966089808d368966d041b915834de840109495ea09c53ba476d62c27311a1"
	I1014 14:33:29.992462  221374 logs.go:123] Gathering logs for kube-scheduler [2eabdafd5b9fe67ded84ac3e07c7aac1549c0d816cc4df8f42705c201500ca45] ...
	I1014 14:33:29.992509  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2eabdafd5b9fe67ded84ac3e07c7aac1549c0d816cc4df8f42705c201500ca45"
	I1014 14:33:30.086659  221374 logs.go:123] Gathering logs for kube-scheduler [b76ea8b9f5d7575fc92a85547c93624a253ab47d7282ef0a4fbaec7476a7b036] ...
	I1014 14:33:30.086752  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b76ea8b9f5d7575fc92a85547c93624a253ab47d7282ef0a4fbaec7476a7b036"
	I1014 14:33:30.154516  221374 logs.go:123] Gathering logs for kube-proxy [4405c0a43acd254020ed58740f4882cb9be99bfe6344a875152e769c3bea55ea] ...
	I1014 14:33:30.154557  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4405c0a43acd254020ed58740f4882cb9be99bfe6344a875152e769c3bea55ea"
	I1014 14:33:30.201141  221374 logs.go:123] Gathering logs for kindnet [f5a7360c53e6dcc6eb70512bbd44fc5f93af1812ff0009b6fb2ecaea9350f00b] ...
	I1014 14:33:30.201185  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5a7360c53e6dcc6eb70512bbd44fc5f93af1812ff0009b6fb2ecaea9350f00b"
	I1014 14:33:30.252413  221374 logs.go:123] Gathering logs for kube-apiserver [e58f4fa3217e55f7d33e39793bcb94041d7c4e2aa8210f442e54126a0c2503ad] ...
	I1014 14:33:30.252442  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e58f4fa3217e55f7d33e39793bcb94041d7c4e2aa8210f442e54126a0c2503ad"
	I1014 14:33:30.320168  221374 logs.go:123] Gathering logs for coredns [39a1af2c5fd180af9c32f1b16798b6b5ea3299873a4b1fc9ec61355612de7600] ...
	I1014 14:33:30.320204  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39a1af2c5fd180af9c32f1b16798b6b5ea3299873a4b1fc9ec61355612de7600"
	I1014 14:33:30.364040  221374 logs.go:123] Gathering logs for storage-provisioner [56956e7eaf546abfe0db6b267bbf3153f86a61390f167b0f1e426a0844b80de2] ...
	I1014 14:33:30.364074  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56956e7eaf546abfe0db6b267bbf3153f86a61390f167b0f1e426a0844b80de2"
	I1014 14:33:28.378972  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:30.383427  216259 pod_ready.go:103] pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace has status "Ready":"False"
	I1014 14:33:30.409888  221374 logs.go:123] Gathering logs for containerd ...
	I1014 14:33:30.409960  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1014 14:33:30.481794  221374 logs.go:123] Gathering logs for dmesg ...
	I1014 14:33:30.481837  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 14:33:30.501698  221374 logs.go:123] Gathering logs for describe nodes ...
	I1014 14:33:30.501737  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 14:33:30.632152  221374 logs.go:123] Gathering logs for coredns [6f6ef466d566710cd5dd06872e0479083fbd53be9fb10b40bcc90af7769f6a6e] ...
	I1014 14:33:30.632186  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f6ef466d566710cd5dd06872e0479083fbd53be9fb10b40bcc90af7769f6a6e"
	I1014 14:33:30.676785  221374 logs.go:123] Gathering logs for kube-proxy [3e3fb59f2932707f2da43d380b0bbf913a8fbb121f8fdbc0f4b71ce7ef81db42] ...
	I1014 14:33:30.676817  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3fb59f2932707f2da43d380b0bbf913a8fbb121f8fdbc0f4b71ce7ef81db42"
	I1014 14:33:30.719795  221374 logs.go:123] Gathering logs for kube-controller-manager [2ae76cd5e0becc958081916f3ddfeac6a70fe9b6be55a2cbfb88550acd880c78] ...
	I1014 14:33:30.719879  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ae76cd5e0becc958081916f3ddfeac6a70fe9b6be55a2cbfb88550acd880c78"
	I1014 14:33:30.786619  221374 logs.go:123] Gathering logs for container status ...
	I1014 14:33:30.786652  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 14:33:30.831788  221374 logs.go:123] Gathering logs for kubelet ...
	I1014 14:33:30.831817  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1014 14:33:30.881622  221374 logs.go:138] Found kubelet problem: Oct 14 14:29:07 no-preload-683238 kubelet[658]: W1014 14:29:07.633342     658 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-683238" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-683238' and this object
	W1014 14:33:30.881944  221374 logs.go:138] Found kubelet problem: Oct 14 14:29:07 no-preload-683238 kubelet[658]: E1014 14:29:07.633448     658 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-683238\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-683238' and this object" logger="UnhandledError"
	I1014 14:33:30.914239  221374 logs.go:123] Gathering logs for etcd [7a23297f8b742bd05d7a4f4f8c1666635c3e11f09c39bcbc93edc350f54b7f42] ...
	I1014 14:33:30.914278  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a23297f8b742bd05d7a4f4f8c1666635c3e11f09c39bcbc93edc350f54b7f42"
	I1014 14:33:30.959638  221374 logs.go:123] Gathering logs for kube-controller-manager [d87c0b5c3e5f6d53e215c9b94b3f3cca514216a568ea540dff26983f4bfd328a] ...
	I1014 14:33:30.959672  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d87c0b5c3e5f6d53e215c9b94b3f3cca514216a568ea540dff26983f4bfd328a"
	I1014 14:33:31.016588  221374 logs.go:123] Gathering logs for kindnet [74eaf8d582efa3dfb728e8d43016ec52ef67ae2f71db70cd197a32da1e4aaabe] ...
	I1014 14:33:31.016627  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74eaf8d582efa3dfb728e8d43016ec52ef67ae2f71db70cd197a32da1e4aaabe"
	I1014 14:33:31.065241  221374 logs.go:123] Gathering logs for kubernetes-dashboard [20520ee100e1b0cbf2059726e645b35dc1dbf4f5a6f1df5c05f5b7530b8c49b1] ...
	I1014 14:33:31.065271  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20520ee100e1b0cbf2059726e645b35dc1dbf4f5a6f1df5c05f5b7530b8c49b1"
	I1014 14:33:31.118164  221374 out.go:358] Setting ErrFile to fd 2...
	I1014 14:33:31.118188  221374 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1014 14:33:31.118268  221374 out.go:270] X Problems detected in kubelet:
	W1014 14:33:31.118281  221374 out.go:270]   Oct 14 14:29:07 no-preload-683238 kubelet[658]: W1014 14:29:07.633342     658 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-683238" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-683238' and this object
	W1014 14:33:31.118288  221374 out.go:270]   Oct 14 14:29:07 no-preload-683238 kubelet[658]: E1014 14:29:07.633448     658 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-683238\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-683238' and this object" logger="UnhandledError"
	I1014 14:33:31.118309  221374 out.go:358] Setting ErrFile to fd 2...
	I1014 14:33:31.118317  221374 out.go:392] TERM=,COLORTERM=, which probably does not support color
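
The warning and the "X Problems detected in kubelet" recap above close one collection cycle: the 400-line journalctl tail of the kubelet unit is scanned for known-bad patterns, each hit is emitted as a "Found kubelet problem" warning, and the flagged lines are then echoed in the recap. A rough by-hand equivalent of that scan (not minikube's actual matcher; the grep pattern only covers the errors visible in this run):

    # pull the same 400-line kubelet tail and surface the lines flagged above
    sudo journalctl -u kubelet -n 400 --no-pager \
      | grep -E 'reflector\.go|Unhandled Error|forbidden'
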
	I1014 14:33:32.379471  216259 pod_ready.go:82] duration metric: took 4m0.006953195s for pod "metrics-server-9975d5f86-zks7j" in "kube-system" namespace to be "Ready" ...
	E1014 14:33:32.379497  216259 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1014 14:33:32.379508  216259 pod_ready.go:39] duration metric: took 5m27.807760732s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1014 14:33:32.379522  216259 api_server.go:52] waiting for apiserver process to appear ...
	I1014 14:33:32.379549  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1014 14:33:32.379612  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 14:33:32.420858  216259 cri.go:89] found id: "251d9455c11c639858e466963ec56324aeb83e9fc382fbe263872a371f538c75"
	I1014 14:33:32.420881  216259 cri.go:89] found id: "a6686cf4c0bd51dce22fafc941c8449ae0ef2121b23d5e7627deed041cf5110a"
	I1014 14:33:32.420887  216259 cri.go:89] found id: ""
	I1014 14:33:32.420895  216259 logs.go:282] 2 containers: [251d9455c11c639858e466963ec56324aeb83e9fc382fbe263872a371f538c75 a6686cf4c0bd51dce22fafc941c8449ae0ef2121b23d5e7627deed041cf5110a]
	I1014 14:33:32.420953  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.424649  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.428392  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1014 14:33:32.428468  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 14:33:32.480876  216259 cri.go:89] found id: "79cda810f8eedaae70b852ca2bbe9f942797b30c95e1a21f945f4c58a2a82667"
	I1014 14:33:32.480900  216259 cri.go:89] found id: "c197ec87040349dc58c31fbfb7bfc3a706e94a38a10ec1ef8a5c3e56acc68de7"
	I1014 14:33:32.480905  216259 cri.go:89] found id: ""
	I1014 14:33:32.480913  216259 logs.go:282] 2 containers: [79cda810f8eedaae70b852ca2bbe9f942797b30c95e1a21f945f4c58a2a82667 c197ec87040349dc58c31fbfb7bfc3a706e94a38a10ec1ef8a5c3e56acc68de7]
	I1014 14:33:32.480974  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.484645  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.488128  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1014 14:33:32.488199  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 14:33:32.543204  216259 cri.go:89] found id: "5f8a8a7df27832dc912d6c524e75b1312fa4e33d94b6dc73cc3554afc4f38900"
	I1014 14:33:32.543226  216259 cri.go:89] found id: "a85040ad3d5d4ba4994b0450b39b274c9b3bd1d4a896f3aa97a7b18f001ae847"
	I1014 14:33:32.543231  216259 cri.go:89] found id: ""
	I1014 14:33:32.543248  216259 logs.go:282] 2 containers: [5f8a8a7df27832dc912d6c524e75b1312fa4e33d94b6dc73cc3554afc4f38900 a85040ad3d5d4ba4994b0450b39b274c9b3bd1d4a896f3aa97a7b18f001ae847]
	I1014 14:33:32.543309  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.547797  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.552745  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1014 14:33:32.552819  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 14:33:32.593623  216259 cri.go:89] found id: "2ffd812c6a8f3a0c8955aa0bc1dcda72cd2b449d10f74400dd5f13a554aa6d85"
	I1014 14:33:32.593661  216259 cri.go:89] found id: "c0e7973b2a3d77c06ba4b0697deba8dc30ded1bfe798e7724746ed6da918124a"
	I1014 14:33:32.593666  216259 cri.go:89] found id: ""
	I1014 14:33:32.593673  216259 logs.go:282] 2 containers: [2ffd812c6a8f3a0c8955aa0bc1dcda72cd2b449d10f74400dd5f13a554aa6d85 c0e7973b2a3d77c06ba4b0697deba8dc30ded1bfe798e7724746ed6da918124a]
	I1014 14:33:32.593738  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.597620  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.601447  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1014 14:33:32.601514  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 14:33:32.638881  216259 cri.go:89] found id: "d02c985f81647cbe7e1a590dd7acca3343a87193b8a80f164dece0f9e2c5c560"
	I1014 14:33:32.638904  216259 cri.go:89] found id: "2ad5db99d8a9f7057e1fd778bf0ca4e5c68f3f9822b3a1bb41bc768d030608e2"
	I1014 14:33:32.638909  216259 cri.go:89] found id: ""
	I1014 14:33:32.638917  216259 logs.go:282] 2 containers: [d02c985f81647cbe7e1a590dd7acca3343a87193b8a80f164dece0f9e2c5c560 2ad5db99d8a9f7057e1fd778bf0ca4e5c68f3f9822b3a1bb41bc768d030608e2]
	I1014 14:33:32.638996  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.642428  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.645883  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 14:33:32.645957  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 14:33:32.686782  216259 cri.go:89] found id: "1d3792d83fc3d30024de05f8b74700440d6d0332131afc49b3bb30a2654a5ff0"
	I1014 14:33:32.686807  216259 cri.go:89] found id: "b68f537421d89ef1144a9d150d1fb404e0cf70cf140e25288a5584f08e599341"
	I1014 14:33:32.686812  216259 cri.go:89] found id: ""
	I1014 14:33:32.686819  216259 logs.go:282] 2 containers: [1d3792d83fc3d30024de05f8b74700440d6d0332131afc49b3bb30a2654a5ff0 b68f537421d89ef1144a9d150d1fb404e0cf70cf140e25288a5584f08e599341]
	I1014 14:33:32.686878  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.690508  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.693860  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1014 14:33:32.693956  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 14:33:32.730043  216259 cri.go:89] found id: "1eb5f19222f629d40890237b452431b7fec59dca210f80c94703b2a47e4b1a5f"
	I1014 14:33:32.730066  216259 cri.go:89] found id: "7d4c84315f92a180b18b91af4a17a095d3b3f788c1c3310a8ad6e6a6174b60ed"
	I1014 14:33:32.730072  216259 cri.go:89] found id: ""
	I1014 14:33:32.730112  216259 logs.go:282] 2 containers: [1eb5f19222f629d40890237b452431b7fec59dca210f80c94703b2a47e4b1a5f 7d4c84315f92a180b18b91af4a17a095d3b3f788c1c3310a8ad6e6a6174b60ed]
	I1014 14:33:32.730184  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.733712  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.737118  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 14:33:32.737184  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 14:33:32.792728  216259 cri.go:89] found id: "d77ae50b9c30cf70a9c9234c8e136c5bbfbc2df275ea64fc301df3a39321a592"
	I1014 14:33:32.792805  216259 cri.go:89] found id: ""
	I1014 14:33:32.792828  216259 logs.go:282] 1 containers: [d77ae50b9c30cf70a9c9234c8e136c5bbfbc2df275ea64fc301df3a39321a592]
	I1014 14:33:32.792920  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.798246  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1014 14:33:32.798401  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 14:33:32.841099  216259 cri.go:89] found id: "9fd76e6e07f511d2eda883f06e9d54e137a9eee4cae7837d973fd4688fa6ef98"
	I1014 14:33:32.841124  216259 cri.go:89] found id: "72aa351ee44e0112bfe7f6c5290b71f617c5f61c9d471cb16a86fa4783a604cc"
	I1014 14:33:32.841129  216259 cri.go:89] found id: ""
	I1014 14:33:32.841137  216259 logs.go:282] 2 containers: [9fd76e6e07f511d2eda883f06e9d54e137a9eee4cae7837d973fd4688fa6ef98 72aa351ee44e0112bfe7f6c5290b71f617c5f61c9d471cb16a86fa4783a604cc]
	I1014 14:33:32.841217  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.844864  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:32.848782  216259 logs.go:123] Gathering logs for containerd ...
	I1014 14:33:32.848824  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1014 14:33:32.909970  216259 logs.go:123] Gathering logs for describe nodes ...
	I1014 14:33:32.910006  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
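
The describe-nodes step runs the kubectl binary that minikube ships onto the node (the v1.20.0 one here) against the node-local kubeconfig. The same output can also be fetched from the host; a sketch, assuming the profile name used by this run, which minikube also uses as the kubeconfig context name:

    # run the bundled kubectl for a given profile from the host
    minikube -p old-k8s-version-805757 kubectl -- describe nodes
    # or use the context minikube writes into ~/.kube/config
    kubectl --context old-k8s-version-805757 describe nodes
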
	I1014 14:33:33.063452  216259 logs.go:123] Gathering logs for kube-scheduler [2ffd812c6a8f3a0c8955aa0bc1dcda72cd2b449d10f74400dd5f13a554aa6d85] ...
	I1014 14:33:33.063486  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ffd812c6a8f3a0c8955aa0bc1dcda72cd2b449d10f74400dd5f13a554aa6d85"
	I1014 14:33:33.106049  216259 logs.go:123] Gathering logs for kube-proxy [d02c985f81647cbe7e1a590dd7acca3343a87193b8a80f164dece0f9e2c5c560] ...
	I1014 14:33:33.106082  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d02c985f81647cbe7e1a590dd7acca3343a87193b8a80f164dece0f9e2c5c560"
	I1014 14:33:33.144927  216259 logs.go:123] Gathering logs for kube-proxy [2ad5db99d8a9f7057e1fd778bf0ca4e5c68f3f9822b3a1bb41bc768d030608e2] ...
	I1014 14:33:33.145001  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad5db99d8a9f7057e1fd778bf0ca4e5c68f3f9822b3a1bb41bc768d030608e2"
	I1014 14:33:33.186145  216259 logs.go:123] Gathering logs for kube-controller-manager [1d3792d83fc3d30024de05f8b74700440d6d0332131afc49b3bb30a2654a5ff0] ...
	I1014 14:33:33.186172  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3792d83fc3d30024de05f8b74700440d6d0332131afc49b3bb30a2654a5ff0"
	I1014 14:33:33.247100  216259 logs.go:123] Gathering logs for kube-controller-manager [b68f537421d89ef1144a9d150d1fb404e0cf70cf140e25288a5584f08e599341] ...
	I1014 14:33:33.247133  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b68f537421d89ef1144a9d150d1fb404e0cf70cf140e25288a5584f08e599341"
	I1014 14:33:33.329663  216259 logs.go:123] Gathering logs for storage-provisioner [9fd76e6e07f511d2eda883f06e9d54e137a9eee4cae7837d973fd4688fa6ef98] ...
	I1014 14:33:33.329749  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9fd76e6e07f511d2eda883f06e9d54e137a9eee4cae7837d973fd4688fa6ef98"
	I1014 14:33:33.376420  216259 logs.go:123] Gathering logs for dmesg ...
	I1014 14:33:33.376514  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 14:33:33.394320  216259 logs.go:123] Gathering logs for etcd [79cda810f8eedaae70b852ca2bbe9f942797b30c95e1a21f945f4c58a2a82667] ...
	I1014 14:33:33.394350  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79cda810f8eedaae70b852ca2bbe9f942797b30c95e1a21f945f4c58a2a82667"
	I1014 14:33:33.436562  216259 logs.go:123] Gathering logs for etcd [c197ec87040349dc58c31fbfb7bfc3a706e94a38a10ec1ef8a5c3e56acc68de7] ...
	I1014 14:33:33.436589  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c197ec87040349dc58c31fbfb7bfc3a706e94a38a10ec1ef8a5c3e56acc68de7"
	I1014 14:33:33.499492  216259 logs.go:123] Gathering logs for coredns [5f8a8a7df27832dc912d6c524e75b1312fa4e33d94b6dc73cc3554afc4f38900] ...
	I1014 14:33:33.499526  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f8a8a7df27832dc912d6c524e75b1312fa4e33d94b6dc73cc3554afc4f38900"
	I1014 14:33:33.542632  216259 logs.go:123] Gathering logs for kube-scheduler [c0e7973b2a3d77c06ba4b0697deba8dc30ded1bfe798e7724746ed6da918124a] ...
	I1014 14:33:33.542660  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0e7973b2a3d77c06ba4b0697deba8dc30ded1bfe798e7724746ed6da918124a"
	I1014 14:33:33.586443  216259 logs.go:123] Gathering logs for kindnet [7d4c84315f92a180b18b91af4a17a095d3b3f788c1c3310a8ad6e6a6174b60ed] ...
	I1014 14:33:33.586475  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d4c84315f92a180b18b91af4a17a095d3b3f788c1c3310a8ad6e6a6174b60ed"
	I1014 14:33:33.631465  216259 logs.go:123] Gathering logs for container status ...
	I1014 14:33:33.631494  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 14:33:33.691723  216259 logs.go:123] Gathering logs for kube-apiserver [a6686cf4c0bd51dce22fafc941c8449ae0ef2121b23d5e7627deed041cf5110a] ...
	I1014 14:33:33.691754  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6686cf4c0bd51dce22fafc941c8449ae0ef2121b23d5e7627deed041cf5110a"
	I1014 14:33:33.767285  216259 logs.go:123] Gathering logs for coredns [a85040ad3d5d4ba4994b0450b39b274c9b3bd1d4a896f3aa97a7b18f001ae847] ...
	I1014 14:33:33.767317  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a85040ad3d5d4ba4994b0450b39b274c9b3bd1d4a896f3aa97a7b18f001ae847"
	I1014 14:33:33.807954  216259 logs.go:123] Gathering logs for kindnet [1eb5f19222f629d40890237b452431b7fec59dca210f80c94703b2a47e4b1a5f] ...
	I1014 14:33:33.807984  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eb5f19222f629d40890237b452431b7fec59dca210f80c94703b2a47e4b1a5f"
	I1014 14:33:33.859116  216259 logs.go:123] Gathering logs for kubelet ...
	I1014 14:33:33.859148  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1014 14:33:33.914903  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:06 old-k8s-version-805757 kubelet[663]: E1014 14:28:06.266848     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1014 14:33:33.915110  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:06 old-k8s-version-805757 kubelet[663]: E1014 14:28:06.684312     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.919359  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:26 old-k8s-version-805757 kubelet[663]: E1014 14:28:26.249917     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1014 14:33:33.919970  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:27 old-k8s-version-805757 kubelet[663]: E1014 14:28:27.815653     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.920299  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:28 old-k8s-version-805757 kubelet[663]: E1014 14:28:28.824018     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.920954  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:35 old-k8s-version-805757 kubelet[663]: E1014 14:28:35.818926     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.921450  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:37 old-k8s-version-805757 kubelet[663]: E1014 14:28:37.852417     663 pod_workers.go:191] Error syncing pod 98cb5c4c-6a3c-475a-bd4a-7fbf7a77d593 ("storage-provisioner_kube-system(98cb5c4c-6a3c-475a-bd4a-7fbf7a77d593)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(98cb5c4c-6a3c-475a-bd4a-7fbf7a77d593)"
	W1014 14:33:33.921637  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:41 old-k8s-version-805757 kubelet[663]: E1014 14:28:41.310054     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.922551  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:50 old-k8s-version-805757 kubelet[663]: E1014 14:28:50.896758     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.925140  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:53 old-k8s-version-805757 kubelet[663]: E1014 14:28:53.334247     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1014 14:33:33.925466  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:55 old-k8s-version-805757 kubelet[663]: E1014 14:28:55.818942     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.925650  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:06 old-k8s-version-805757 kubelet[663]: E1014 14:29:06.323071     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.925979  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:08 old-k8s-version-805757 kubelet[663]: E1014 14:29:08.309155     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.926162  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:18 old-k8s-version-805757 kubelet[663]: E1014 14:29:18.309720     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.926754  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:22 old-k8s-version-805757 kubelet[663]: E1014 14:29:22.029488     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.927087  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:25 old-k8s-version-805757 kubelet[663]: E1014 14:29:25.818872     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.927274  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:31 old-k8s-version-805757 kubelet[663]: E1014 14:29:31.309563     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.927604  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:37 old-k8s-version-805757 kubelet[663]: E1014 14:29:37.309788     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.930029  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:43 old-k8s-version-805757 kubelet[663]: E1014 14:29:43.326510     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1014 14:33:33.930355  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:52 old-k8s-version-805757 kubelet[663]: E1014 14:29:52.309613     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.930539  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:55 old-k8s-version-805757 kubelet[663]: E1014 14:29:55.314732     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.931123  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:05 old-k8s-version-805757 kubelet[663]: E1014 14:30:05.189312     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.931457  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:06 old-k8s-version-805757 kubelet[663]: E1014 14:30:06.193682     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.931692  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:07 old-k8s-version-805757 kubelet[663]: E1014 14:30:07.313589     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.931887  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:19 old-k8s-version-805757 kubelet[663]: E1014 14:30:19.309716     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.932213  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:20 old-k8s-version-805757 kubelet[663]: E1014 14:30:20.309049     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.932397  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:31 old-k8s-version-805757 kubelet[663]: E1014 14:30:31.309588     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.932739  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:32 old-k8s-version-805757 kubelet[663]: E1014 14:30:32.309235     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.932923  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:42 old-k8s-version-805757 kubelet[663]: E1014 14:30:42.309932     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.933269  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:44 old-k8s-version-805757 kubelet[663]: E1014 14:30:44.309221     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.933455  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:53 old-k8s-version-805757 kubelet[663]: E1014 14:30:53.316385     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.933790  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:56 old-k8s-version-805757 kubelet[663]: E1014 14:30:56.309146     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.936214  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:07 old-k8s-version-805757 kubelet[663]: E1014 14:31:07.318192     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1014 14:33:33.936546  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:09 old-k8s-version-805757 kubelet[663]: E1014 14:31:09.309131     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.936876  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:22 old-k8s-version-805757 kubelet[663]: E1014 14:31:22.309175     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.937067  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:22 old-k8s-version-805757 kubelet[663]: E1014 14:31:22.310110     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.937382  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:34 old-k8s-version-805757 kubelet[663]: E1014 14:31:34.309480     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.937836  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:34 old-k8s-version-805757 kubelet[663]: E1014 14:31:34.435860     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.938165  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:35 old-k8s-version-805757 kubelet[663]: E1014 14:31:35.819352     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.938348  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:47 old-k8s-version-805757 kubelet[663]: E1014 14:31:47.309512     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.938678  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:50 old-k8s-version-805757 kubelet[663]: E1014 14:31:50.309308     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.938860  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:02 old-k8s-version-805757 kubelet[663]: E1014 14:32:02.309592     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.939185  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:05 old-k8s-version-805757 kubelet[663]: E1014 14:32:05.313117     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.939373  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:14 old-k8s-version-805757 kubelet[663]: E1014 14:32:14.309655     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.939697  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:16 old-k8s-version-805757 kubelet[663]: E1014 14:32:16.309258     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.939879  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:27 old-k8s-version-805757 kubelet[663]: E1014 14:32:27.309768     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.940225  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:31 old-k8s-version-805757 kubelet[663]: E1014 14:32:31.309863     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.940409  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:41 old-k8s-version-805757 kubelet[663]: E1014 14:32:41.312511     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.940736  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:42 old-k8s-version-805757 kubelet[663]: E1014 14:32:42.309556     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.941071  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:53 old-k8s-version-805757 kubelet[663]: E1014 14:32:53.310344     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.941255  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:56 old-k8s-version-805757 kubelet[663]: E1014 14:32:56.309439     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.941579  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:08 old-k8s-version-805757 kubelet[663]: E1014 14:33:08.309189     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.941762  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:11 old-k8s-version-805757 kubelet[663]: E1014 14:33:11.309693     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.942087  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:21 old-k8s-version-805757 kubelet[663]: E1014 14:33:21.309193     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:33.942269  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:23 old-k8s-version-805757 kubelet[663]: E1014 14:33:23.310309     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:33.942596  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:33 old-k8s-version-805757 kubelet[663]: E1014 14:33:33.309615     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	I1014 14:33:33.942606  216259 logs.go:123] Gathering logs for kube-apiserver [251d9455c11c639858e466963ec56324aeb83e9fc382fbe263872a371f538c75] ...
	I1014 14:33:33.942620  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251d9455c11c639858e466963ec56324aeb83e9fc382fbe263872a371f538c75"
	I1014 14:33:34.026421  216259 logs.go:123] Gathering logs for kubernetes-dashboard [d77ae50b9c30cf70a9c9234c8e136c5bbfbc2df275ea64fc301df3a39321a592] ...
	I1014 14:33:34.026454  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d77ae50b9c30cf70a9c9234c8e136c5bbfbc2df275ea64fc301df3a39321a592"
	I1014 14:33:34.078590  216259 logs.go:123] Gathering logs for storage-provisioner [72aa351ee44e0112bfe7f6c5290b71f617c5f61c9d471cb16a86fa4783a604cc] ...
	I1014 14:33:34.078624  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72aa351ee44e0112bfe7f6c5290b71f617c5f61c9d471cb16a86fa4783a604cc"
	I1014 14:33:34.119813  216259 out.go:358] Setting ErrFile to fd 2...
	I1014 14:33:34.119843  216259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1014 14:33:34.119917  216259 out.go:270] X Problems detected in kubelet:
	W1014 14:33:34.119930  216259 out.go:270]   Oct 14 14:33:08 old-k8s-version-805757 kubelet[663]: E1014 14:33:08.309189     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:34.119938  216259 out.go:270]   Oct 14 14:33:11 old-k8s-version-805757 kubelet[663]: E1014 14:33:11.309693     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:34.119967  216259 out.go:270]   Oct 14 14:33:21 old-k8s-version-805757 kubelet[663]: E1014 14:33:21.309193     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:34.119983  216259 out.go:270]   Oct 14 14:33:23 old-k8s-version-805757 kubelet[663]: E1014 14:33:23.310309     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:34.120001  216259 out.go:270]   Oct 14 14:33:33 old-k8s-version-805757 kubelet[663]: E1014 14:33:33.309615     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	I1014 14:33:34.120016  216259 out.go:358] Setting ErrFile to fd 2...
	I1014 14:33:34.120023  216259 out.go:392] TERM=,COLORTERM=, which probably does not support color
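
The two interleaved traces here (processes 216259 and 221374) both follow the same log-gathering pattern: for each control-plane component, minikube first resolves container IDs with "crictl ps -a --quiet --name=<component>", then tails each ID with "crictl logs --tail 400". Below is a minimal standalone sketch of that loop for reading the report, not minikube's actual implementation; it assumes crictl is installed on the node and pointed at the containerd socket, and the component list is illustrative.

    #!/usr/bin/env bash
    # Sketch: gather recent logs per control-plane component, mirroring the
    # crictl commands visible in the trace above. Component list is illustrative.
    set -euo pipefail

    components="kube-apiserver etcd coredns kube-scheduler kube-proxy \
    kube-controller-manager kindnet storage-provisioner kubernetes-dashboard"

    for name in $components; do
        # List all container IDs (running and exited) for this component,
        # as in the trace's "crictl ps -a --quiet --name=<name>".
        for id in $(sudo crictl ps -a --quiet --name="$name"); do
            echo "=== $name [$id] ==="
            # Tail the last 400 lines, matching "crictl logs --tail 400 <id>".
            sudo crictl logs --tail 400 "$id"
        done
    done
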
	I1014 14:33:41.119719  221374 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1014 14:33:41.128530  221374 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1014 14:33:41.129521  221374 api_server.go:141] control plane version: v1.31.1
	I1014 14:33:41.129549  221374 api_server.go:131] duration metric: took 11.754750193s to wait for apiserver health ...
	I1014 14:33:41.129559  221374 system_pods.go:43] waiting for kube-system pods to appear ...
	I1014 14:33:41.129581  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1014 14:33:41.129644  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 14:33:41.172939  221374 cri.go:89] found id: "e58f4fa3217e55f7d33e39793bcb94041d7c4e2aa8210f442e54126a0c2503ad"
	I1014 14:33:41.172964  221374 cri.go:89] found id: "f7f6f14df48eb2e8a0f74267278fc99b340b9cd7a6d1d05c977f6b10aaeb3394"
	I1014 14:33:41.172970  221374 cri.go:89] found id: ""
	I1014 14:33:41.172977  221374 logs.go:282] 2 containers: [e58f4fa3217e55f7d33e39793bcb94041d7c4e2aa8210f442e54126a0c2503ad f7f6f14df48eb2e8a0f74267278fc99b340b9cd7a6d1d05c977f6b10aaeb3394]
	I1014 14:33:41.173032  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:41.177247  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:41.181210  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1014 14:33:41.181285  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 14:33:41.221162  221374 cri.go:89] found id: "7a23297f8b742bd05d7a4f4f8c1666635c3e11f09c39bcbc93edc350f54b7f42"
	I1014 14:33:41.221185  221374 cri.go:89] found id: "a59966089808d368966d041b915834de840109495ea09c53ba476d62c27311a1"
	I1014 14:33:41.221190  221374 cri.go:89] found id: ""
	I1014 14:33:41.221198  221374 logs.go:282] 2 containers: [7a23297f8b742bd05d7a4f4f8c1666635c3e11f09c39bcbc93edc350f54b7f42 a59966089808d368966d041b915834de840109495ea09c53ba476d62c27311a1]
	I1014 14:33:41.221255  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:41.224988  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:41.228876  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1014 14:33:41.228991  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 14:33:41.271645  221374 cri.go:89] found id: "39a1af2c5fd180af9c32f1b16798b6b5ea3299873a4b1fc9ec61355612de7600"
	I1014 14:33:41.271676  221374 cri.go:89] found id: "6f6ef466d566710cd5dd06872e0479083fbd53be9fb10b40bcc90af7769f6a6e"
	I1014 14:33:41.271687  221374 cri.go:89] found id: ""
	I1014 14:33:41.271694  221374 logs.go:282] 2 containers: [39a1af2c5fd180af9c32f1b16798b6b5ea3299873a4b1fc9ec61355612de7600 6f6ef466d566710cd5dd06872e0479083fbd53be9fb10b40bcc90af7769f6a6e]
	I1014 14:33:41.271768  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:41.275279  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:41.278860  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1014 14:33:41.278931  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 14:33:41.327799  221374 cri.go:89] found id: "2eabdafd5b9fe67ded84ac3e07c7aac1549c0d816cc4df8f42705c201500ca45"
	I1014 14:33:41.327822  221374 cri.go:89] found id: "b76ea8b9f5d7575fc92a85547c93624a253ab47d7282ef0a4fbaec7476a7b036"
	I1014 14:33:41.327827  221374 cri.go:89] found id: ""
	I1014 14:33:41.327835  221374 logs.go:282] 2 containers: [2eabdafd5b9fe67ded84ac3e07c7aac1549c0d816cc4df8f42705c201500ca45 b76ea8b9f5d7575fc92a85547c93624a253ab47d7282ef0a4fbaec7476a7b036]
	I1014 14:33:41.327908  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:41.331686  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:41.335398  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1014 14:33:41.335474  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 14:33:41.372822  221374 cri.go:89] found id: "3e3fb59f2932707f2da43d380b0bbf913a8fbb121f8fdbc0f4b71ce7ef81db42"
	I1014 14:33:41.372844  221374 cri.go:89] found id: "4405c0a43acd254020ed58740f4882cb9be99bfe6344a875152e769c3bea55ea"
	I1014 14:33:41.372849  221374 cri.go:89] found id: ""
	I1014 14:33:41.372856  221374 logs.go:282] 2 containers: [3e3fb59f2932707f2da43d380b0bbf913a8fbb121f8fdbc0f4b71ce7ef81db42 4405c0a43acd254020ed58740f4882cb9be99bfe6344a875152e769c3bea55ea]
	I1014 14:33:41.372912  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:41.376783  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:41.380253  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 14:33:41.380378  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 14:33:41.421638  221374 cri.go:89] found id: "2ae76cd5e0becc958081916f3ddfeac6a70fe9b6be55a2cbfb88550acd880c78"
	I1014 14:33:41.421698  221374 cri.go:89] found id: "d87c0b5c3e5f6d53e215c9b94b3f3cca514216a568ea540dff26983f4bfd328a"
	I1014 14:33:41.421718  221374 cri.go:89] found id: ""
	I1014 14:33:41.421741  221374 logs.go:282] 2 containers: [2ae76cd5e0becc958081916f3ddfeac6a70fe9b6be55a2cbfb88550acd880c78 d87c0b5c3e5f6d53e215c9b94b3f3cca514216a568ea540dff26983f4bfd328a]
	I1014 14:33:41.421822  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:41.425570  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:41.429635  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1014 14:33:41.429733  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 14:33:41.472170  221374 cri.go:89] found id: "f5a7360c53e6dcc6eb70512bbd44fc5f93af1812ff0009b6fb2ecaea9350f00b"
	I1014 14:33:41.472189  221374 cri.go:89] found id: "74eaf8d582efa3dfb728e8d43016ec52ef67ae2f71db70cd197a32da1e4aaabe"
	I1014 14:33:41.472193  221374 cri.go:89] found id: ""
	I1014 14:33:41.472200  221374 logs.go:282] 2 containers: [f5a7360c53e6dcc6eb70512bbd44fc5f93af1812ff0009b6fb2ecaea9350f00b 74eaf8d582efa3dfb728e8d43016ec52ef67ae2f71db70cd197a32da1e4aaabe]
	I1014 14:33:41.472259  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:41.477084  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:41.481422  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1014 14:33:41.481545  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 14:33:41.521997  221374 cri.go:89] found id: "56956e7eaf546abfe0db6b267bbf3153f86a61390f167b0f1e426a0844b80de2"
	I1014 14:33:41.522032  221374 cri.go:89] found id: "8d49746b9b00eef68fa8aea7e15cc8b9d8ae1f83529dcdb17b74d8b64e76d8de"
	I1014 14:33:41.522037  221374 cri.go:89] found id: ""
	I1014 14:33:41.522045  221374 logs.go:282] 2 containers: [56956e7eaf546abfe0db6b267bbf3153f86a61390f167b0f1e426a0844b80de2 8d49746b9b00eef68fa8aea7e15cc8b9d8ae1f83529dcdb17b74d8b64e76d8de]
	I1014 14:33:41.522110  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:41.525899  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:41.529733  221374 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 14:33:41.529831  221374 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 14:33:41.576084  221374 cri.go:89] found id: "20520ee100e1b0cbf2059726e645b35dc1dbf4f5a6f1df5c05f5b7530b8c49b1"
	I1014 14:33:41.576155  221374 cri.go:89] found id: ""
	I1014 14:33:41.576178  221374 logs.go:282] 1 containers: [20520ee100e1b0cbf2059726e645b35dc1dbf4f5a6f1df5c05f5b7530b8c49b1]
	I1014 14:33:41.576267  221374 ssh_runner.go:195] Run: which crictl
	I1014 14:33:41.579842  221374 logs.go:123] Gathering logs for coredns [39a1af2c5fd180af9c32f1b16798b6b5ea3299873a4b1fc9ec61355612de7600] ...
	I1014 14:33:41.579870  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39a1af2c5fd180af9c32f1b16798b6b5ea3299873a4b1fc9ec61355612de7600"
	I1014 14:33:41.621340  221374 logs.go:123] Gathering logs for kube-proxy [3e3fb59f2932707f2da43d380b0bbf913a8fbb121f8fdbc0f4b71ce7ef81db42] ...
	I1014 14:33:41.621373  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3e3fb59f2932707f2da43d380b0bbf913a8fbb121f8fdbc0f4b71ce7ef81db42"
	I1014 14:33:41.662517  221374 logs.go:123] Gathering logs for kindnet [f5a7360c53e6dcc6eb70512bbd44fc5f93af1812ff0009b6fb2ecaea9350f00b] ...
	I1014 14:33:41.662545  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5a7360c53e6dcc6eb70512bbd44fc5f93af1812ff0009b6fb2ecaea9350f00b"
	I1014 14:33:41.711910  221374 logs.go:123] Gathering logs for container status ...
	I1014 14:33:41.711943  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 14:33:41.756526  221374 logs.go:123] Gathering logs for containerd ...
	I1014 14:33:41.756559  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1014 14:33:41.820543  221374 logs.go:123] Gathering logs for describe nodes ...
	I1014 14:33:41.820586  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 14:33:41.949009  221374 logs.go:123] Gathering logs for etcd [7a23297f8b742bd05d7a4f4f8c1666635c3e11f09c39bcbc93edc350f54b7f42] ...
	I1014 14:33:41.949037  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7a23297f8b742bd05d7a4f4f8c1666635c3e11f09c39bcbc93edc350f54b7f42"
	I1014 14:33:41.997736  221374 logs.go:123] Gathering logs for kube-scheduler [2eabdafd5b9fe67ded84ac3e07c7aac1549c0d816cc4df8f42705c201500ca45] ...
	I1014 14:33:41.997771  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2eabdafd5b9fe67ded84ac3e07c7aac1549c0d816cc4df8f42705c201500ca45"
	I1014 14:33:42.061231  221374 logs.go:123] Gathering logs for kube-controller-manager [2ae76cd5e0becc958081916f3ddfeac6a70fe9b6be55a2cbfb88550acd880c78] ...
	I1014 14:33:42.061305  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ae76cd5e0becc958081916f3ddfeac6a70fe9b6be55a2cbfb88550acd880c78"
	I1014 14:33:42.170189  221374 logs.go:123] Gathering logs for kindnet [74eaf8d582efa3dfb728e8d43016ec52ef67ae2f71db70cd197a32da1e4aaabe] ...
	I1014 14:33:42.171705  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 74eaf8d582efa3dfb728e8d43016ec52ef67ae2f71db70cd197a32da1e4aaabe"
	I1014 14:33:42.245255  221374 logs.go:123] Gathering logs for storage-provisioner [56956e7eaf546abfe0db6b267bbf3153f86a61390f167b0f1e426a0844b80de2] ...
	I1014 14:33:42.245291  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56956e7eaf546abfe0db6b267bbf3153f86a61390f167b0f1e426a0844b80de2"
	I1014 14:33:42.295737  221374 logs.go:123] Gathering logs for storage-provisioner [8d49746b9b00eef68fa8aea7e15cc8b9d8ae1f83529dcdb17b74d8b64e76d8de] ...
	I1014 14:33:42.295816  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8d49746b9b00eef68fa8aea7e15cc8b9d8ae1f83529dcdb17b74d8b64e76d8de"
	I1014 14:33:42.340872  221374 logs.go:123] Gathering logs for kubernetes-dashboard [20520ee100e1b0cbf2059726e645b35dc1dbf4f5a6f1df5c05f5b7530b8c49b1] ...
	I1014 14:33:42.340904  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 20520ee100e1b0cbf2059726e645b35dc1dbf4f5a6f1df5c05f5b7530b8c49b1"
	I1014 14:33:42.391558  221374 logs.go:123] Gathering logs for dmesg ...
	I1014 14:33:42.391588  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 14:33:42.408991  221374 logs.go:123] Gathering logs for kube-apiserver [e58f4fa3217e55f7d33e39793bcb94041d7c4e2aa8210f442e54126a0c2503ad] ...
	I1014 14:33:42.409020  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e58f4fa3217e55f7d33e39793bcb94041d7c4e2aa8210f442e54126a0c2503ad"
	I1014 14:33:42.487325  221374 logs.go:123] Gathering logs for kube-apiserver [f7f6f14df48eb2e8a0f74267278fc99b340b9cd7a6d1d05c977f6b10aaeb3394] ...
	I1014 14:33:42.487358  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f7f6f14df48eb2e8a0f74267278fc99b340b9cd7a6d1d05c977f6b10aaeb3394"
	I1014 14:33:42.545043  221374 logs.go:123] Gathering logs for etcd [a59966089808d368966d041b915834de840109495ea09c53ba476d62c27311a1] ...
	I1014 14:33:42.545109  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a59966089808d368966d041b915834de840109495ea09c53ba476d62c27311a1"
	I1014 14:33:42.595915  221374 logs.go:123] Gathering logs for kube-scheduler [b76ea8b9f5d7575fc92a85547c93624a253ab47d7282ef0a4fbaec7476a7b036] ...
	I1014 14:33:42.595958  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b76ea8b9f5d7575fc92a85547c93624a253ab47d7282ef0a4fbaec7476a7b036"
	I1014 14:33:42.658268  221374 logs.go:123] Gathering logs for kube-controller-manager [d87c0b5c3e5f6d53e215c9b94b3f3cca514216a568ea540dff26983f4bfd328a] ...
	I1014 14:33:42.658299  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d87c0b5c3e5f6d53e215c9b94b3f3cca514216a568ea540dff26983f4bfd328a"
	I1014 14:33:42.730912  221374 logs.go:123] Gathering logs for kubelet ...
	I1014 14:33:42.730947  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1014 14:33:42.777539  221374 logs.go:138] Found kubelet problem: Oct 14 14:29:07 no-preload-683238 kubelet[658]: W1014 14:29:07.633342     658 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-683238" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-683238' and this object
	W1014 14:33:42.777823  221374 logs.go:138] Found kubelet problem: Oct 14 14:29:07 no-preload-683238 kubelet[658]: E1014 14:29:07.633448     658 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-683238\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-683238' and this object" logger="UnhandledError"
	I1014 14:33:42.815000  221374 logs.go:123] Gathering logs for coredns [6f6ef466d566710cd5dd06872e0479083fbd53be9fb10b40bcc90af7769f6a6e] ...
	I1014 14:33:42.815057  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6f6ef466d566710cd5dd06872e0479083fbd53be9fb10b40bcc90af7769f6a6e"
	I1014 14:33:42.864030  221374 logs.go:123] Gathering logs for kube-proxy [4405c0a43acd254020ed58740f4882cb9be99bfe6344a875152e769c3bea55ea] ...
	I1014 14:33:42.864059  221374 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4405c0a43acd254020ed58740f4882cb9be99bfe6344a875152e769c3bea55ea"
	I1014 14:33:42.908744  221374 out.go:358] Setting ErrFile to fd 2...
	I1014 14:33:42.908778  221374 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1014 14:33:42.908858  221374 out.go:270] X Problems detected in kubelet:
	W1014 14:33:42.908873  221374 out.go:270]   Oct 14 14:29:07 no-preload-683238 kubelet[658]: W1014 14:29:07.633342     658 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-683238" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-683238' and this object
	W1014 14:33:42.908880  221374 out.go:270]   Oct 14 14:29:07 no-preload-683238 kubelet[658]: E1014 14:29:07.633448     658 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-683238\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-683238' and this object" logger="UnhandledError"
	I1014 14:33:42.908889  221374 out.go:358] Setting ErrFile to fd 2...
	I1014 14:33:42.908904  221374 out.go:392] TERM=,COLORTERM=, which probably does not support color
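
Before each summary block, the run also pulls the kubelet journal ("sudo journalctl -u kubelet -n 400") and reports matching lines as "Found kubelet problem"; the "X Problems detected in kubelet" output is a re-print of the most recent matches. A rough way to reproduce that scan by hand on the node is sketched below; the grep pattern is an illustrative assumption, not the actual matcher used by logs.go.

    # Sketch: approximate the "Found kubelet problem" scan manually.
    # The pattern is an illustrative guess at what minikube flags
    # (logs.go:138); it is not the project's real matching logic.
    sudo journalctl -u kubelet -n 400 --no-pager \
      | grep -E 'Error syncing pod|failed to list|CrashLoopBackOff|ImagePullBackOff' \
      || echo "no kubelet problems matched"
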
	I1014 14:33:44.120900  216259 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 14:33:44.133023  216259 api_server.go:72] duration metric: took 5m58.684236678s to wait for apiserver process to appear ...
	I1014 14:33:44.133085  216259 api_server.go:88] waiting for apiserver healthz status ...
	I1014 14:33:44.133120  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I1014 14:33:44.133174  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1014 14:33:44.170555  216259 cri.go:89] found id: "251d9455c11c639858e466963ec56324aeb83e9fc382fbe263872a371f538c75"
	I1014 14:33:44.170580  216259 cri.go:89] found id: "a6686cf4c0bd51dce22fafc941c8449ae0ef2121b23d5e7627deed041cf5110a"
	I1014 14:33:44.170586  216259 cri.go:89] found id: ""
	I1014 14:33:44.170594  216259 logs.go:282] 2 containers: [251d9455c11c639858e466963ec56324aeb83e9fc382fbe263872a371f538c75 a6686cf4c0bd51dce22fafc941c8449ae0ef2121b23d5e7627deed041cf5110a]
	I1014 14:33:44.170646  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.174119  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.177502  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I1014 14:33:44.177578  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1014 14:33:44.224527  216259 cri.go:89] found id: "79cda810f8eedaae70b852ca2bbe9f942797b30c95e1a21f945f4c58a2a82667"
	I1014 14:33:44.224545  216259 cri.go:89] found id: "c197ec87040349dc58c31fbfb7bfc3a706e94a38a10ec1ef8a5c3e56acc68de7"
	I1014 14:33:44.224550  216259 cri.go:89] found id: ""
	I1014 14:33:44.224557  216259 logs.go:282] 2 containers: [79cda810f8eedaae70b852ca2bbe9f942797b30c95e1a21f945f4c58a2a82667 c197ec87040349dc58c31fbfb7bfc3a706e94a38a10ec1ef8a5c3e56acc68de7]
	I1014 14:33:44.224612  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.228575  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.232598  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I1014 14:33:44.232668  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1014 14:33:44.270635  216259 cri.go:89] found id: "5f8a8a7df27832dc912d6c524e75b1312fa4e33d94b6dc73cc3554afc4f38900"
	I1014 14:33:44.270658  216259 cri.go:89] found id: "a85040ad3d5d4ba4994b0450b39b274c9b3bd1d4a896f3aa97a7b18f001ae847"
	I1014 14:33:44.270663  216259 cri.go:89] found id: ""
	I1014 14:33:44.270671  216259 logs.go:282] 2 containers: [5f8a8a7df27832dc912d6c524e75b1312fa4e33d94b6dc73cc3554afc4f38900 a85040ad3d5d4ba4994b0450b39b274c9b3bd1d4a896f3aa97a7b18f001ae847]
	I1014 14:33:44.270726  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.274335  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.277724  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I1014 14:33:44.277802  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1014 14:33:44.317752  216259 cri.go:89] found id: "2ffd812c6a8f3a0c8955aa0bc1dcda72cd2b449d10f74400dd5f13a554aa6d85"
	I1014 14:33:44.317776  216259 cri.go:89] found id: "c0e7973b2a3d77c06ba4b0697deba8dc30ded1bfe798e7724746ed6da918124a"
	I1014 14:33:44.317781  216259 cri.go:89] found id: ""
	I1014 14:33:44.317788  216259 logs.go:282] 2 containers: [2ffd812c6a8f3a0c8955aa0bc1dcda72cd2b449d10f74400dd5f13a554aa6d85 c0e7973b2a3d77c06ba4b0697deba8dc30ded1bfe798e7724746ed6da918124a]
	I1014 14:33:44.317870  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.321413  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.325175  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I1014 14:33:44.325249  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1014 14:33:44.362783  216259 cri.go:89] found id: "d02c985f81647cbe7e1a590dd7acca3343a87193b8a80f164dece0f9e2c5c560"
	I1014 14:33:44.362817  216259 cri.go:89] found id: "2ad5db99d8a9f7057e1fd778bf0ca4e5c68f3f9822b3a1bb41bc768d030608e2"
	I1014 14:33:44.362823  216259 cri.go:89] found id: ""
	I1014 14:33:44.362830  216259 logs.go:282] 2 containers: [d02c985f81647cbe7e1a590dd7acca3343a87193b8a80f164dece0f9e2c5c560 2ad5db99d8a9f7057e1fd778bf0ca4e5c68f3f9822b3a1bb41bc768d030608e2]
	I1014 14:33:44.362887  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.366408  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.370140  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I1014 14:33:44.370214  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1014 14:33:44.417871  216259 cri.go:89] found id: "1d3792d83fc3d30024de05f8b74700440d6d0332131afc49b3bb30a2654a5ff0"
	I1014 14:33:44.417896  216259 cri.go:89] found id: "b68f537421d89ef1144a9d150d1fb404e0cf70cf140e25288a5584f08e599341"
	I1014 14:33:44.417902  216259 cri.go:89] found id: ""
	I1014 14:33:44.417909  216259 logs.go:282] 2 containers: [1d3792d83fc3d30024de05f8b74700440d6d0332131afc49b3bb30a2654a5ff0 b68f537421d89ef1144a9d150d1fb404e0cf70cf140e25288a5584f08e599341]
	I1014 14:33:44.417994  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.421787  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.425502  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I1014 14:33:44.425596  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1014 14:33:44.482599  216259 cri.go:89] found id: "1eb5f19222f629d40890237b452431b7fec59dca210f80c94703b2a47e4b1a5f"
	I1014 14:33:44.482628  216259 cri.go:89] found id: "7d4c84315f92a180b18b91af4a17a095d3b3f788c1c3310a8ad6e6a6174b60ed"
	I1014 14:33:44.482634  216259 cri.go:89] found id: ""
	I1014 14:33:44.482641  216259 logs.go:282] 2 containers: [1eb5f19222f629d40890237b452431b7fec59dca210f80c94703b2a47e4b1a5f 7d4c84315f92a180b18b91af4a17a095d3b3f788c1c3310a8ad6e6a6174b60ed]
	I1014 14:33:44.482714  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.486782  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.490394  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I1014 14:33:44.490491  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1014 14:33:44.528561  216259 cri.go:89] found id: "9fd76e6e07f511d2eda883f06e9d54e137a9eee4cae7837d973fd4688fa6ef98"
	I1014 14:33:44.528583  216259 cri.go:89] found id: "72aa351ee44e0112bfe7f6c5290b71f617c5f61c9d471cb16a86fa4783a604cc"
	I1014 14:33:44.528589  216259 cri.go:89] found id: ""
	I1014 14:33:44.528595  216259 logs.go:282] 2 containers: [9fd76e6e07f511d2eda883f06e9d54e137a9eee4cae7837d973fd4688fa6ef98 72aa351ee44e0112bfe7f6c5290b71f617c5f61c9d471cb16a86fa4783a604cc]
	I1014 14:33:44.528649  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.532284  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.535798  216259 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1014 14:33:44.535874  216259 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1014 14:33:44.578853  216259 cri.go:89] found id: "d77ae50b9c30cf70a9c9234c8e136c5bbfbc2df275ea64fc301df3a39321a592"
	I1014 14:33:44.578877  216259 cri.go:89] found id: ""
	I1014 14:33:44.578885  216259 logs.go:282] 1 containers: [d77ae50b9c30cf70a9c9234c8e136c5bbfbc2df275ea64fc301df3a39321a592]
	I1014 14:33:44.578961  216259 ssh_runner.go:195] Run: which crictl
	I1014 14:33:44.582867  216259 logs.go:123] Gathering logs for describe nodes ...
	I1014 14:33:44.582892  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1014 14:33:44.752896  216259 logs.go:123] Gathering logs for kube-apiserver [251d9455c11c639858e466963ec56324aeb83e9fc382fbe263872a371f538c75] ...
	I1014 14:33:44.752928  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 251d9455c11c639858e466963ec56324aeb83e9fc382fbe263872a371f538c75"
	I1014 14:33:44.830813  216259 logs.go:123] Gathering logs for kube-apiserver [a6686cf4c0bd51dce22fafc941c8449ae0ef2121b23d5e7627deed041cf5110a] ...
	I1014 14:33:44.830847  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6686cf4c0bd51dce22fafc941c8449ae0ef2121b23d5e7627deed041cf5110a"
	I1014 14:33:44.889290  216259 logs.go:123] Gathering logs for etcd [79cda810f8eedaae70b852ca2bbe9f942797b30c95e1a21f945f4c58a2a82667] ...
	I1014 14:33:44.889327  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 79cda810f8eedaae70b852ca2bbe9f942797b30c95e1a21f945f4c58a2a82667"
	I1014 14:33:44.942768  216259 logs.go:123] Gathering logs for coredns [5f8a8a7df27832dc912d6c524e75b1312fa4e33d94b6dc73cc3554afc4f38900] ...
	I1014 14:33:44.942800  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5f8a8a7df27832dc912d6c524e75b1312fa4e33d94b6dc73cc3554afc4f38900"
	I1014 14:33:44.986308  216259 logs.go:123] Gathering logs for kube-scheduler [2ffd812c6a8f3a0c8955aa0bc1dcda72cd2b449d10f74400dd5f13a554aa6d85] ...
	I1014 14:33:44.986352  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ffd812c6a8f3a0c8955aa0bc1dcda72cd2b449d10f74400dd5f13a554aa6d85"
	I1014 14:33:45.073595  216259 logs.go:123] Gathering logs for kubelet ...
	I1014 14:33:45.073705  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1014 14:33:45.198592  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:06 old-k8s-version-805757 kubelet[663]: E1014 14:28:06.266848     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1014 14:33:45.198804  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:06 old-k8s-version-805757 kubelet[663]: E1014 14:28:06.684312     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.203091  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:26 old-k8s-version-805757 kubelet[663]: E1014 14:28:26.249917     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1014 14:33:45.203689  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:27 old-k8s-version-805757 kubelet[663]: E1014 14:28:27.815653     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.204013  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:28 old-k8s-version-805757 kubelet[663]: E1014 14:28:28.824018     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.204666  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:35 old-k8s-version-805757 kubelet[663]: E1014 14:28:35.818926     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.205116  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:37 old-k8s-version-805757 kubelet[663]: E1014 14:28:37.852417     663 pod_workers.go:191] Error syncing pod 98cb5c4c-6a3c-475a-bd4a-7fbf7a77d593 ("storage-provisioner_kube-system(98cb5c4c-6a3c-475a-bd4a-7fbf7a77d593)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(98cb5c4c-6a3c-475a-bd4a-7fbf7a77d593)"
	W1014 14:33:45.205314  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:41 old-k8s-version-805757 kubelet[663]: E1014 14:28:41.310054     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.206366  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:50 old-k8s-version-805757 kubelet[663]: E1014 14:28:50.896758     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.218156  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:53 old-k8s-version-805757 kubelet[663]: E1014 14:28:53.334247     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1014 14:33:45.218513  216259 logs.go:138] Found kubelet problem: Oct 14 14:28:55 old-k8s-version-805757 kubelet[663]: E1014 14:28:55.818942     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.218699  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:06 old-k8s-version-805757 kubelet[663]: E1014 14:29:06.323071     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.219022  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:08 old-k8s-version-805757 kubelet[663]: E1014 14:29:08.309155     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.219202  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:18 old-k8s-version-805757 kubelet[663]: E1014 14:29:18.309720     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.219791  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:22 old-k8s-version-805757 kubelet[663]: E1014 14:29:22.029488     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.220114  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:25 old-k8s-version-805757 kubelet[663]: E1014 14:29:25.818872     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.220295  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:31 old-k8s-version-805757 kubelet[663]: E1014 14:29:31.309563     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.220631  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:37 old-k8s-version-805757 kubelet[663]: E1014 14:29:37.309788     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.223090  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:43 old-k8s-version-805757 kubelet[663]: E1014 14:29:43.326510     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1014 14:33:45.223425  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:52 old-k8s-version-805757 kubelet[663]: E1014 14:29:52.309613     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.223604  216259 logs.go:138] Found kubelet problem: Oct 14 14:29:55 old-k8s-version-805757 kubelet[663]: E1014 14:29:55.314732     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.224185  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:05 old-k8s-version-805757 kubelet[663]: E1014 14:30:05.189312     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.224511  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:06 old-k8s-version-805757 kubelet[663]: E1014 14:30:06.193682     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.224696  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:07 old-k8s-version-805757 kubelet[663]: E1014 14:30:07.313589     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.224880  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:19 old-k8s-version-805757 kubelet[663]: E1014 14:30:19.309716     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.225215  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:20 old-k8s-version-805757 kubelet[663]: E1014 14:30:20.309049     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.225399  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:31 old-k8s-version-805757 kubelet[663]: E1014 14:30:31.309588     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.225733  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:32 old-k8s-version-805757 kubelet[663]: E1014 14:30:32.309235     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.226050  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:42 old-k8s-version-805757 kubelet[663]: E1014 14:30:42.309932     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.226377  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:44 old-k8s-version-805757 kubelet[663]: E1014 14:30:44.309221     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.226560  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:53 old-k8s-version-805757 kubelet[663]: E1014 14:30:53.316385     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.226889  216259 logs.go:138] Found kubelet problem: Oct 14 14:30:56 old-k8s-version-805757 kubelet[663]: E1014 14:30:56.309146     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.229323  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:07 old-k8s-version-805757 kubelet[663]: E1014 14:31:07.318192     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	W1014 14:33:45.229654  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:09 old-k8s-version-805757 kubelet[663]: E1014 14:31:09.309131     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.229981  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:22 old-k8s-version-805757 kubelet[663]: E1014 14:31:22.309175     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.230162  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:22 old-k8s-version-805757 kubelet[663]: E1014 14:31:22.310110     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.230471  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:34 old-k8s-version-805757 kubelet[663]: E1014 14:31:34.309480     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.230923  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:34 old-k8s-version-805757 kubelet[663]: E1014 14:31:34.435860     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.231247  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:35 old-k8s-version-805757 kubelet[663]: E1014 14:31:35.819352     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.236078  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:47 old-k8s-version-805757 kubelet[663]: E1014 14:31:47.309512     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.237896  216259 logs.go:138] Found kubelet problem: Oct 14 14:31:50 old-k8s-version-805757 kubelet[663]: E1014 14:31:50.309308     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.238465  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:02 old-k8s-version-805757 kubelet[663]: E1014 14:32:02.309592     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.252418  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:05 old-k8s-version-805757 kubelet[663]: E1014 14:32:05.313117     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.252614  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:14 old-k8s-version-805757 kubelet[663]: E1014 14:32:14.309655     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.252960  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:16 old-k8s-version-805757 kubelet[663]: E1014 14:32:16.309258     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.253193  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:27 old-k8s-version-805757 kubelet[663]: E1014 14:32:27.309768     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.253523  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:31 old-k8s-version-805757 kubelet[663]: E1014 14:32:31.309863     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.254681  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:41 old-k8s-version-805757 kubelet[663]: E1014 14:32:41.312511     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.255240  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:42 old-k8s-version-805757 kubelet[663]: E1014 14:32:42.309556     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.270793  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:53 old-k8s-version-805757 kubelet[663]: E1014 14:32:53.310344     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.271466  216259 logs.go:138] Found kubelet problem: Oct 14 14:32:56 old-k8s-version-805757 kubelet[663]: E1014 14:32:56.309439     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.273796  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:08 old-k8s-version-805757 kubelet[663]: E1014 14:33:08.309189     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.274454  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:11 old-k8s-version-805757 kubelet[663]: E1014 14:33:11.309693     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.274969  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:21 old-k8s-version-805757 kubelet[663]: E1014 14:33:21.309193     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.275169  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:23 old-k8s-version-805757 kubelet[663]: E1014 14:33:23.310309     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:45.275499  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:33 old-k8s-version-805757 kubelet[663]: E1014 14:33:33.309615     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:45.275737  216259 logs.go:138] Found kubelet problem: Oct 14 14:33:37 old-k8s-version-805757 kubelet[663]: E1014 14:33:37.309694     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1014 14:33:45.275746  216259 logs.go:123] Gathering logs for dmesg ...
	I1014 14:33:45.275762  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1014 14:33:45.298044  216259 logs.go:123] Gathering logs for containerd ...
	I1014 14:33:45.298085  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I1014 14:33:45.376459  216259 logs.go:123] Gathering logs for kube-controller-manager [b68f537421d89ef1144a9d150d1fb404e0cf70cf140e25288a5584f08e599341] ...
	I1014 14:33:45.376528  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b68f537421d89ef1144a9d150d1fb404e0cf70cf140e25288a5584f08e599341"
	I1014 14:33:45.466492  216259 logs.go:123] Gathering logs for storage-provisioner [9fd76e6e07f511d2eda883f06e9d54e137a9eee4cae7837d973fd4688fa6ef98] ...
	I1014 14:33:45.466528  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9fd76e6e07f511d2eda883f06e9d54e137a9eee4cae7837d973fd4688fa6ef98"
	I1014 14:33:45.515454  216259 logs.go:123] Gathering logs for kube-controller-manager [1d3792d83fc3d30024de05f8b74700440d6d0332131afc49b3bb30a2654a5ff0] ...
	I1014 14:33:45.515479  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d3792d83fc3d30024de05f8b74700440d6d0332131afc49b3bb30a2654a5ff0"
	I1014 14:33:45.569851  216259 logs.go:123] Gathering logs for kubernetes-dashboard [d77ae50b9c30cf70a9c9234c8e136c5bbfbc2df275ea64fc301df3a39321a592] ...
	I1014 14:33:45.569885  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d77ae50b9c30cf70a9c9234c8e136c5bbfbc2df275ea64fc301df3a39321a592"
	I1014 14:33:45.618928  216259 logs.go:123] Gathering logs for coredns [a85040ad3d5d4ba4994b0450b39b274c9b3bd1d4a896f3aa97a7b18f001ae847] ...
	I1014 14:33:45.618956  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a85040ad3d5d4ba4994b0450b39b274c9b3bd1d4a896f3aa97a7b18f001ae847"
	I1014 14:33:45.664497  216259 logs.go:123] Gathering logs for kube-proxy [2ad5db99d8a9f7057e1fd778bf0ca4e5c68f3f9822b3a1bb41bc768d030608e2] ...
	I1014 14:33:45.664526  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ad5db99d8a9f7057e1fd778bf0ca4e5c68f3f9822b3a1bb41bc768d030608e2"
	I1014 14:33:45.705177  216259 logs.go:123] Gathering logs for kindnet [7d4c84315f92a180b18b91af4a17a095d3b3f788c1c3310a8ad6e6a6174b60ed] ...
	I1014 14:33:45.705207  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7d4c84315f92a180b18b91af4a17a095d3b3f788c1c3310a8ad6e6a6174b60ed"
	I1014 14:33:45.746170  216259 logs.go:123] Gathering logs for storage-provisioner [72aa351ee44e0112bfe7f6c5290b71f617c5f61c9d471cb16a86fa4783a604cc] ...
	I1014 14:33:45.746197  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72aa351ee44e0112bfe7f6c5290b71f617c5f61c9d471cb16a86fa4783a604cc"
	I1014 14:33:45.792090  216259 logs.go:123] Gathering logs for etcd [c197ec87040349dc58c31fbfb7bfc3a706e94a38a10ec1ef8a5c3e56acc68de7] ...
	I1014 14:33:45.792120  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c197ec87040349dc58c31fbfb7bfc3a706e94a38a10ec1ef8a5c3e56acc68de7"
	I1014 14:33:45.834397  216259 logs.go:123] Gathering logs for kube-scheduler [c0e7973b2a3d77c06ba4b0697deba8dc30ded1bfe798e7724746ed6da918124a] ...
	I1014 14:33:45.834428  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0e7973b2a3d77c06ba4b0697deba8dc30ded1bfe798e7724746ed6da918124a"
	I1014 14:33:45.878661  216259 logs.go:123] Gathering logs for container status ...
	I1014 14:33:45.878691  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1014 14:33:45.959148  216259 logs.go:123] Gathering logs for kube-proxy [d02c985f81647cbe7e1a590dd7acca3343a87193b8a80f164dece0f9e2c5c560] ...
	I1014 14:33:45.959178  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d02c985f81647cbe7e1a590dd7acca3343a87193b8a80f164dece0f9e2c5c560"
	I1014 14:33:46.009192  216259 logs.go:123] Gathering logs for kindnet [1eb5f19222f629d40890237b452431b7fec59dca210f80c94703b2a47e4b1a5f] ...
	I1014 14:33:46.009223  216259 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1eb5f19222f629d40890237b452431b7fec59dca210f80c94703b2a47e4b1a5f"
	I1014 14:33:46.062167  216259 out.go:358] Setting ErrFile to fd 2...
	I1014 14:33:46.062197  216259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1014 14:33:46.062248  216259 out.go:270] X Problems detected in kubelet:
	W1014 14:33:46.062263  216259 out.go:270]   Oct 14 14:33:11 old-k8s-version-805757 kubelet[663]: E1014 14:33:11.309693     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:46.062281  216259 out.go:270]   Oct 14 14:33:21 old-k8s-version-805757 kubelet[663]: E1014 14:33:21.309193     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:46.062290  216259 out.go:270]   Oct 14 14:33:23 old-k8s-version-805757 kubelet[663]: E1014 14:33:23.310309     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W1014 14:33:46.062297  216259 out.go:270]   Oct 14 14:33:33 old-k8s-version-805757 kubelet[663]: E1014 14:33:33.309615     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	W1014 14:33:46.062302  216259 out.go:270]   Oct 14 14:33:37 old-k8s-version-805757 kubelet[663]: E1014 14:33:37.309694     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I1014 14:33:46.062308  216259 out.go:358] Setting ErrFile to fd 2...
	I1014 14:33:46.062319  216259 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:33:52.918074  221374 system_pods.go:59] 9 kube-system pods found
	I1014 14:33:52.918115  221374 system_pods.go:61] "coredns-7c65d6cfc9-f6xml" [99f1489e-cc24-4616-a14c-0623e58ee2bf] Running
	I1014 14:33:52.918121  221374 system_pods.go:61] "etcd-no-preload-683238" [4550a01c-010e-4e36-a4aa-9b2f83c9b713] Running
	I1014 14:33:52.918126  221374 system_pods.go:61] "kindnet-f2688" [5904efd5-0db4-40b7-a60a-c3c7961fcfdc] Running
	I1014 14:33:52.918130  221374 system_pods.go:61] "kube-apiserver-no-preload-683238" [1d6e0716-f895-44c7-9df9-96b5fbdb5142] Running
	I1014 14:33:52.918135  221374 system_pods.go:61] "kube-controller-manager-no-preload-683238" [45b42c37-4f4e-42df-8442-6539db158762] Running
	I1014 14:33:52.918143  221374 system_pods.go:61] "kube-proxy-lkxpz" [d9686a02-e05a-4282-9cc4-4a09854ec0cf] Running
	I1014 14:33:52.918147  221374 system_pods.go:61] "kube-scheduler-no-preload-683238" [2a880ad7-9bc8-476e-953a-75444920457a] Running
	I1014 14:33:52.918154  221374 system_pods.go:61] "metrics-server-6867b74b74-95gsh" [70fab2fb-a15f-40a0-9b6b-303ee26ac20c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 14:33:52.918163  221374 system_pods.go:61] "storage-provisioner" [52685ec1-b851-4fe4-a576-d669f9477d71] Running
	I1014 14:33:52.918171  221374 system_pods.go:74] duration metric: took 11.788606138s to wait for pod list to return data ...
	I1014 14:33:52.918180  221374 default_sa.go:34] waiting for default service account to be created ...
	I1014 14:33:52.921132  221374 default_sa.go:45] found service account: "default"
	I1014 14:33:52.921161  221374 default_sa.go:55] duration metric: took 2.974347ms for default service account to be created ...
	I1014 14:33:52.921171  221374 system_pods.go:116] waiting for k8s-apps to be running ...
	I1014 14:33:52.928572  221374 system_pods.go:86] 9 kube-system pods found
	I1014 14:33:52.928601  221374 system_pods.go:89] "coredns-7c65d6cfc9-f6xml" [99f1489e-cc24-4616-a14c-0623e58ee2bf] Running
	I1014 14:33:52.928609  221374 system_pods.go:89] "etcd-no-preload-683238" [4550a01c-010e-4e36-a4aa-9b2f83c9b713] Running
	I1014 14:33:52.928613  221374 system_pods.go:89] "kindnet-f2688" [5904efd5-0db4-40b7-a60a-c3c7961fcfdc] Running
	I1014 14:33:52.928617  221374 system_pods.go:89] "kube-apiserver-no-preload-683238" [1d6e0716-f895-44c7-9df9-96b5fbdb5142] Running
	I1014 14:33:52.928623  221374 system_pods.go:89] "kube-controller-manager-no-preload-683238" [45b42c37-4f4e-42df-8442-6539db158762] Running
	I1014 14:33:52.928627  221374 system_pods.go:89] "kube-proxy-lkxpz" [d9686a02-e05a-4282-9cc4-4a09854ec0cf] Running
	I1014 14:33:52.928631  221374 system_pods.go:89] "kube-scheduler-no-preload-683238" [2a880ad7-9bc8-476e-953a-75444920457a] Running
	I1014 14:33:52.928640  221374 system_pods.go:89] "metrics-server-6867b74b74-95gsh" [70fab2fb-a15f-40a0-9b6b-303ee26ac20c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1014 14:33:52.928649  221374 system_pods.go:89] "storage-provisioner" [52685ec1-b851-4fe4-a576-d669f9477d71] Running
	I1014 14:33:52.928657  221374 system_pods.go:126] duration metric: took 7.479969ms to wait for k8s-apps to be running ...
	I1014 14:33:52.928677  221374 system_svc.go:44] waiting for kubelet service to be running ....
	I1014 14:33:52.928736  221374 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 14:33:52.940929  221374 system_svc.go:56] duration metric: took 12.243438ms WaitForService to wait for kubelet
	I1014 14:33:52.941004  221374 kubeadm.go:582] duration metric: took 4m54.716852674s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1014 14:33:52.941031  221374 node_conditions.go:102] verifying NodePressure condition ...
	I1014 14:33:52.944285  221374 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1014 14:33:52.944317  221374 node_conditions.go:123] node cpu capacity is 2
	I1014 14:33:52.944330  221374 node_conditions.go:105] duration metric: took 3.292526ms to run NodePressure ...
	I1014 14:33:52.944342  221374 start.go:241] waiting for startup goroutines ...
	I1014 14:33:52.944348  221374 start.go:246] waiting for cluster config update ...
	I1014 14:33:52.944359  221374 start.go:255] writing updated cluster config ...
	I1014 14:33:52.944656  221374 ssh_runner.go:195] Run: rm -f paused
	I1014 14:33:53.009851  221374 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1014 14:33:53.012333  221374 out.go:177] * Done! kubectl is now configured to use "no-preload-683238" cluster and "default" namespace by default
	I1014 14:33:56.063973  216259 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1014 14:33:56.076815  216259 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1014 14:33:56.083169  216259 out.go:201] 
	W1014 14:33:56.086386  216259 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W1014 14:33:56.086601  216259 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W1014 14:33:56.086656  216259 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W1014 14:33:56.086698  216259 out.go:270] * 
	W1014 14:33:56.087684  216259 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1014 14:33:56.089991  216259 out.go:201] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	dc5088c9224e7       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   c515cbb37865b       dashboard-metrics-scraper-8d5bb5db8-xjhz5
	9fd76e6e07f51       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   d8994361f0168       storage-provisioner
	d77ae50b9c30c       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   521bff5b7c8e1       kubernetes-dashboard-cd95d586-xqjk6
	72aa351ee44e0       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   d8994361f0168       storage-provisioner
	d02c985f81647       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   0671ec5e43a5d       kube-proxy-nj7wx
	33520ce905033       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   b6b8ac925efc8       busybox
	1eb5f19222f62       0bcd66b03df5f       5 minutes ago       Running             kindnet-cni                 1                   11c95ac4d503b       kindnet-8f22s
	5f8a8a7df2783       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   49b9dffc43aee       coredns-74ff55c5b-x5x6d
	251d9455c11c6       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   052721ee8ed41       kube-apiserver-old-k8s-version-805757
	1d3792d83fc3d       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   4378856a9ba8e       kube-controller-manager-old-k8s-version-805757
	2ffd812c6a8f3       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   278c9a81961fb       kube-scheduler-old-k8s-version-805757
	79cda810f8eed       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   b98395c76137e       etcd-old-k8s-version-805757
	0f50ec7800cb3       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   5d08d46fa8271       busybox
	a85040ad3d5d4       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   f90f86b42995b       coredns-74ff55c5b-x5x6d
	7d4c84315f92a       0bcd66b03df5f       8 minutes ago       Exited              kindnet-cni                 0                   9d859fbe1e3b8       kindnet-8f22s
	2ad5db99d8a9f       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   edad9b5fdbdd0       kube-proxy-nj7wx
	b68f537421d89       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   27609bae3ebc5       kube-controller-manager-old-k8s-version-805757
	c197ec8704034       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   b845110602a51       etcd-old-k8s-version-805757
	a6686cf4c0bd5       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   487983048c650       kube-apiserver-old-k8s-version-805757
	c0e7973b2a3d7       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   a4c206544b594       kube-scheduler-old-k8s-version-805757
	
	
	==> containerd <==
	Oct 14 14:30:04 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:30:04.351981187Z" level=info msg="CreateContainer within sandbox \"c515cbb37865b5da68dee239753e7ad51dbbd39027f7f0bccd608aff756cbcce\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"65434ac2ec6313b355ba746c94877a58da6d0c529ff5f810ce63f49bb69e7902\""
	Oct 14 14:30:04 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:30:04.353230680Z" level=info msg="StartContainer for \"65434ac2ec6313b355ba746c94877a58da6d0c529ff5f810ce63f49bb69e7902\""
	Oct 14 14:30:04 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:30:04.437542806Z" level=info msg="StartContainer for \"65434ac2ec6313b355ba746c94877a58da6d0c529ff5f810ce63f49bb69e7902\" returns successfully"
	Oct 14 14:30:04 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:30:04.471122530Z" level=info msg="shim disconnected" id=65434ac2ec6313b355ba746c94877a58da6d0c529ff5f810ce63f49bb69e7902 namespace=k8s.io
	Oct 14 14:30:04 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:30:04.471183108Z" level=warning msg="cleaning up after shim disconnected" id=65434ac2ec6313b355ba746c94877a58da6d0c529ff5f810ce63f49bb69e7902 namespace=k8s.io
	Oct 14 14:30:04 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:30:04.471194382Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 14 14:30:05 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:30:05.195061061Z" level=info msg="RemoveContainer for \"be4620a23ccfb6a727db4f51b7fd92850d9c925f9f74cbcf125a043b3e7d5ee3\""
	Oct 14 14:30:05 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:30:05.200493028Z" level=info msg="RemoveContainer for \"be4620a23ccfb6a727db4f51b7fd92850d9c925f9f74cbcf125a043b3e7d5ee3\" returns successfully"
	Oct 14 14:31:07 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:31:07.310036533Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 14 14:31:07 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:31:07.315503930Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Oct 14 14:31:07 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:31:07.317707641Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Oct 14 14:31:07 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:31:07.317779288Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Oct 14 14:31:33 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:31:33.312373439Z" level=info msg="CreateContainer within sandbox \"c515cbb37865b5da68dee239753e7ad51dbbd39027f7f0bccd608aff756cbcce\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Oct 14 14:31:33 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:31:33.331618258Z" level=info msg="CreateContainer within sandbox \"c515cbb37865b5da68dee239753e7ad51dbbd39027f7f0bccd608aff756cbcce\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"dc5088c9224e75b2683c4bb24ca7da4f6d319228071e6e905cb675161ce0de88\""
	Oct 14 14:31:33 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:31:33.332223553Z" level=info msg="StartContainer for \"dc5088c9224e75b2683c4bb24ca7da4f6d319228071e6e905cb675161ce0de88\""
	Oct 14 14:31:33 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:31:33.414348163Z" level=info msg="StartContainer for \"dc5088c9224e75b2683c4bb24ca7da4f6d319228071e6e905cb675161ce0de88\" returns successfully"
	Oct 14 14:31:33 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:31:33.472592481Z" level=info msg="shim disconnected" id=dc5088c9224e75b2683c4bb24ca7da4f6d319228071e6e905cb675161ce0de88 namespace=k8s.io
	Oct 14 14:31:33 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:31:33.472651263Z" level=warning msg="cleaning up after shim disconnected" id=dc5088c9224e75b2683c4bb24ca7da4f6d319228071e6e905cb675161ce0de88 namespace=k8s.io
	Oct 14 14:31:33 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:31:33.472661905Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Oct 14 14:31:34 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:31:34.436910158Z" level=info msg="RemoveContainer for \"65434ac2ec6313b355ba746c94877a58da6d0c529ff5f810ce63f49bb69e7902\""
	Oct 14 14:31:34 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:31:34.448953808Z" level=info msg="RemoveContainer for \"65434ac2ec6313b355ba746c94877a58da6d0c529ff5f810ce63f49bb69e7902\" returns successfully"
	Oct 14 14:33:51 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:33:51.310583061Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 14 14:33:51 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:33:51.327756582Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host" host=fake.domain
	Oct 14 14:33:51 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:33:51.329364191Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Oct 14 14:33:51 old-k8s-version-805757 containerd[570]: time="2024-10-14T14:33:51.329481442Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [5f8a8a7df27832dc912d6c524e75b1312fa4e33d94b6dc73cc3554afc4f38900] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:35668 - 8491 "HINFO IN 5319357523280930131.6686186340026150134. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.071234205s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I1014 14:28:36.598158       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-14 14:28:06.597513933 +0000 UTC m=+0.101952940) (total time: 30.000536285s):
	Trace[2019727887]: [30.000536285s] [30.000536285s] END
	E1014 14:28:36.598201       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1014 14:28:36.598995       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-14 14:28:06.598181558 +0000 UTC m=+0.102620564) (total time: 30.00079456s):
	Trace[939984059]: [30.00079456s] [30.00079456s] END
	E1014 14:28:36.599009       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I1014 14:28:36.601246       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-10-14 14:28:06.598782474 +0000 UTC m=+0.103221481) (total time: 30.002445445s):
	Trace[1474941318]: [30.002445445s] [30.002445445s] END
	E1014 14:28:36.601263       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [a85040ad3d5d4ba4994b0450b39b274c9b3bd1d4a896f3aa97a7b18f001ae847] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = 093a0bf1423dd8c4eee62372bb216168
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:51086 - 40440 "HINFO IN 6009934037175861298.1325771440707215968. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012530787s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-805757
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-805757
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f9f6c2ada6d933af9900f45012fe0fe625736c5b
	                    minikube.k8s.io/name=old-k8s-version-805757
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_14T14_25_21_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 14 Oct 2024 14:25:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-805757
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 14 Oct 2024 14:33:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 14 Oct 2024 14:33:57 +0000   Mon, 14 Oct 2024 14:25:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 14 Oct 2024 14:33:57 +0000   Mon, 14 Oct 2024 14:25:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 14 Oct 2024 14:33:57 +0000   Mon, 14 Oct 2024 14:25:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 14 Oct 2024 14:33:57 +0000   Mon, 14 Oct 2024 14:25:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-805757
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7d8e5a9a8178449b8a2f3795a34035ee
	  System UUID:                44482af2-ef4e-4540-950e-4f6294afbe40
	  Boot ID:                    7f37d908-3a8a-4f73-8f6a-d0166945a75f
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 coredns-74ff55c5b-x5x6d                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m21s
	  kube-system                 etcd-old-k8s-version-805757                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m28s
	  kube-system                 kindnet-8f22s                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m20s
	  kube-system                 kube-apiserver-old-k8s-version-805757             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-controller-manager-old-k8s-version-805757    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-proxy-nj7wx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  kube-system                 kube-scheduler-old-k8s-version-805757             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 metrics-server-9975d5f86-zks7j                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m35s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-xjhz5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-xqjk6               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m47s (x5 over 8m47s)  kubelet     Node old-k8s-version-805757 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m47s (x5 over 8m47s)  kubelet     Node old-k8s-version-805757 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m47s (x4 over 8m47s)  kubelet     Node old-k8s-version-805757 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m28s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m28s                  kubelet     Node old-k8s-version-805757 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m28s                  kubelet     Node old-k8s-version-805757 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m28s                  kubelet     Node old-k8s-version-805757 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m28s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m20s                  kubelet     Node old-k8s-version-805757 status is now: NodeReady
	  Normal  Starting                 8m19s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m4s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m4s (x8 over 6m4s)    kubelet     Node old-k8s-version-805757 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m4s (x7 over 6m4s)    kubelet     Node old-k8s-version-805757 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m4s (x8 over 6m4s)    kubelet     Node old-k8s-version-805757 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m4s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m50s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Oct14 13:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014705] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.413719] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.054156] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.016129] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.802336] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.474781] kauditd_printk_skb: 34 callbacks suppressed
	[Oct14 14:17] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [79cda810f8eedaae70b852ca2bbe9f942797b30c95e1a21f945f4c58a2a82667] <==
	2024-10-14 14:29:52.509913 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:30:02.510006 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:30:12.509963 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:30:22.509990 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:30:32.509949 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:30:42.510044 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:30:52.510060 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:31:02.510082 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:31:12.509870 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:31:22.509975 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:31:32.509938 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:31:42.509985 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:31:52.510059 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:32:02.510043 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:32:12.510030 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:32:22.509957 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:32:32.510001 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:32:42.510037 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:32:52.509892 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:33:02.509881 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:33:12.509923 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:33:22.509953 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:33:32.510059 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:33:42.511240 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:33:52.510185 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [c197ec87040349dc58c31fbfb7bfc3a706e94a38a10ec1ef8a5c3e56acc68de7] <==
	2024-10-14 14:25:11.232986 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2024/10/14 14:25:11 INFO: 9f0758e1c58a86ed is starting a new election at term 1
	raft2024/10/14 14:25:11 INFO: 9f0758e1c58a86ed became candidate at term 2
	raft2024/10/14 14:25:11 INFO: 9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2
	raft2024/10/14 14:25:11 INFO: 9f0758e1c58a86ed became leader at term 2
	raft2024/10/14 14:25:11 INFO: raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2
	2024-10-14 14:25:11.872993 I | etcdserver: published {Name:old-k8s-version-805757 ClientURLs:[https://192.168.85.2:2379]} to cluster 68eaea490fab4e05
	2024-10-14 14:25:11.873144 I | embed: ready to serve client requests
	2024-10-14 14:25:11.874911 I | embed: serving client requests on 127.0.0.1:2379
	2024-10-14 14:25:11.875249 I | etcdserver: setting up the initial cluster version to 3.4
	2024-10-14 14:25:11.875630 I | embed: ready to serve client requests
	2024-10-14 14:25:11.876958 I | embed: serving client requests on 192.168.85.2:2379
	2024-10-14 14:25:11.897817 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-10-14 14:25:11.898285 I | etcdserver/api: enabled capabilities for version 3.4
	2024-10-14 14:25:36.958899 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:25:42.283753 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:25:52.283572 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:26:02.283599 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:26:12.283474 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:26:22.283667 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:26:32.283591 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:26:42.283928 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:26:52.283494 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:27:02.283542 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-10-14 14:27:12.294356 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 14:33:57 up  1:16,  0 users,  load average: 0.88, 1.58, 2.29
	Linux old-k8s-version-805757 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1eb5f19222f629d40890237b452431b7fec59dca210f80c94703b2a47e4b1a5f] <==
	I1014 14:31:57.326839       1 main.go:300] handling current node
	I1014 14:32:07.317886       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:32:07.317920       1 main.go:300] handling current node
	I1014 14:32:17.322358       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:32:17.322402       1 main.go:300] handling current node
	I1014 14:32:27.325157       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:32:27.325195       1 main.go:300] handling current node
	I1014 14:32:37.326924       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:32:37.326962       1 main.go:300] handling current node
	I1014 14:32:47.321591       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:32:47.321637       1 main.go:300] handling current node
	I1014 14:32:57.325148       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:32:57.325182       1 main.go:300] handling current node
	I1014 14:33:07.318504       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:33:07.318538       1 main.go:300] handling current node
	I1014 14:33:17.325300       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:33:17.325332       1 main.go:300] handling current node
	I1014 14:33:27.325170       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:33:27.325202       1 main.go:300] handling current node
	I1014 14:33:37.325173       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:33:37.325207       1 main.go:300] handling current node
	I1014 14:33:47.323375       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:33:47.323411       1 main.go:300] handling current node
	I1014 14:33:57.326056       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:33:57.326089       1 main.go:300] handling current node
	
	
	==> kindnet [7d4c84315f92a180b18b91af4a17a095d3b3f788c1c3310a8ad6e6a6174b60ed] <==
	I1014 14:25:40.618301       1 controller.go:342] Waiting for informer caches to sync
	I1014 14:25:40.618400       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I1014 14:25:40.818904       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1014 14:25:40.818933       1 metrics.go:61] Registering metrics
	I1014 14:25:40.818989       1 controller.go:378] Syncing nftables rules
	I1014 14:25:50.626294       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:25:50.626358       1 main.go:300] handling current node
	I1014 14:26:00.618201       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:26:00.618237       1 main.go:300] handling current node
	I1014 14:26:10.627088       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:26:10.627123       1 main.go:300] handling current node
	I1014 14:26:20.622112       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:26:20.622148       1 main.go:300] handling current node
	I1014 14:26:30.618286       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:26:30.618327       1 main.go:300] handling current node
	I1014 14:26:40.617965       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:26:40.617999       1 main.go:300] handling current node
	I1014 14:26:50.617423       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:26:50.617463       1 main.go:300] handling current node
	I1014 14:27:00.618363       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:27:00.618397       1 main.go:300] handling current node
	I1014 14:27:10.618421       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:27:10.618460       1 main.go:300] handling current node
	I1014 14:27:20.624554       1 main.go:296] Handling node with IPs: map[192.168.85.2:{}]
	I1014 14:27:20.624602       1 main.go:300] handling current node
	
	
	==> kube-apiserver [251d9455c11c639858e466963ec56324aeb83e9fc382fbe263872a371f538c75] <==
	I1014 14:30:19.749154       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1014 14:30:19.749164       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1014 14:30:50.106775       1 client.go:360] parsed scheme: "passthrough"
	I1014 14:30:50.106823       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1014 14:30:50.106834       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1014 14:31:08.177190       1 handler_proxy.go:102] no RequestInfo found in the context
	E1014 14:31:08.177297       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1014 14:31:08.177313       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1014 14:31:34.527496       1 client.go:360] parsed scheme: "passthrough"
	I1014 14:31:34.527542       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1014 14:31:34.527550       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1014 14:32:19.323887       1 client.go:360] parsed scheme: "passthrough"
	I1014 14:32:19.323930       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1014 14:32:19.323940       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1014 14:33:03.862456       1 client.go:360] parsed scheme: "passthrough"
	I1014 14:33:03.862500       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1014 14:33:03.862533       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W1014 14:33:05.538786       1 handler_proxy.go:102] no RequestInfo found in the context
	E1014 14:33:05.538859       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1014 14:33:05.538873       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1014 14:33:37.393406       1 client.go:360] parsed scheme: "passthrough"
	I1014 14:33:37.393496       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1014 14:33:37.393539       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [a6686cf4c0bd51dce22fafc941c8449ae0ef2121b23d5e7627deed041cf5110a] <==
	I1014 14:25:18.481990       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1014 14:25:18.482019       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1014 14:25:18.510002       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I1014 14:25:18.515323       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I1014 14:25:18.515349       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1014 14:25:18.974701       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1014 14:25:19.032762       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1014 14:25:19.175661       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1014 14:25:19.176807       1 controller.go:606] quota admission added evaluator for: endpoints
	I1014 14:25:19.181490       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1014 14:25:20.125180       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1014 14:25:20.811480       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1014 14:25:20.947232       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1014 14:25:29.300810       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1014 14:25:36.914002       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1014 14:25:36.936560       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1014 14:25:54.083443       1 client.go:360] parsed scheme: "passthrough"
	I1014 14:25:54.083495       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1014 14:25:54.083505       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1014 14:26:34.805708       1 client.go:360] parsed scheme: "passthrough"
	I1014 14:26:34.805931       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1014 14:26:34.806081       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I1014 14:27:09.145404       1 client.go:360] parsed scheme: "passthrough"
	I1014 14:27:09.145483       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I1014 14:27:09.145497       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [1d3792d83fc3d30024de05f8b74700440d6d0332131afc49b3bb30a2654a5ff0] <==
	E1014 14:29:52.802030       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1014 14:30:00.932610       1 request.go:655] Throttling request took 1.048482506s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W1014 14:30:01.784178       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1014 14:30:23.304432       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1014 14:30:33.434621       1 request.go:655] Throttling request took 1.048468783s, request: GET:https://192.168.85.2:8443/apis/coordination.k8s.io/v1?timeout=32s
	W1014 14:30:34.286203       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1014 14:30:53.806664       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1014 14:31:05.936634       1 request.go:655] Throttling request took 1.048360885s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W1014 14:31:06.788187       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1014 14:31:24.308626       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1014 14:31:38.438740       1 request.go:655] Throttling request took 1.047704135s, request: GET:https://192.168.85.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W1014 14:31:39.290402       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1014 14:31:54.810492       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1014 14:32:10.940982       1 request.go:655] Throttling request took 1.048367158s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W1014 14:32:11.792471       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1014 14:32:25.314541       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1014 14:32:43.442907       1 request.go:655] Throttling request took 1.048441929s, request: GET:https://192.168.85.2:8443/apis/scheduling.k8s.io/v1beta1?timeout=32s
	W1014 14:32:44.294380       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1014 14:32:55.816387       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1014 14:33:15.945084       1 request.go:655] Throttling request took 1.048467183s, request: GET:https://192.168.85.2:8443/apis/extensions/v1beta1?timeout=32s
	W1014 14:33:16.796404       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1014 14:33:26.318243       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I1014 14:33:48.447010       1 request.go:655] Throttling request took 1.048408477s, request: GET:https://192.168.85.2:8443/apis/authorization.k8s.io/v1beta1?timeout=32s
	W1014 14:33:49.298538       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1014 14:33:56.820436       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
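The repeating pattern above (a throttled discovery GET followed by "failed to discover some groups: metrics.k8s.io/v1beta1") means the aggregated metrics API has no healthy backend, which matches the metrics-server pod stuck in ImagePullBackOff in the kubelet log further down. A minimal way to confirm the broken aggregation from outside the node, assuming kubectl is pointed at this profile's context (old-k8s-version-805757), would be:

  kubectl --context old-k8s-version-805757 get apiservice v1beta1.metrics.k8s.io
  kubectl --context old-k8s-version-805757 -n kube-system describe deployment metrics-server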
	
	
	==> kube-controller-manager [b68f537421d89ef1144a9d150d1fb404e0cf70cf140e25288a5584f08e599341] <==
	E1014 14:25:36.989123       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I1014 14:25:37.034790       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-x5x6d"
	I1014 14:25:37.040943       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I1014 14:25:37.048036       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-nj7wx"
	I1014 14:25:37.052779       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-8f22s"
	I1014 14:25:37.078858       1 shared_informer.go:247] Caches are synced for resource quota 
	E1014 14:25:37.101154       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I1014 14:25:37.113564       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-ghpcx"
	I1014 14:25:37.123609       1 shared_informer.go:247] Caches are synced for resource quota 
	I1014 14:25:37.143697       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I1014 14:25:37.143823       1 shared_informer.go:247] Caches are synced for endpoint 
	I1014 14:25:37.143902       1 shared_informer.go:247] Caches are synced for endpoint_slice 
	E1014 14:25:37.168553       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"c97006db-bf2b-4178-9156-b6d152384289", ResourceVersion:"261", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63864512720, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400146cc60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400146cc80)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x400146cca0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4000ecd280), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400146c
cc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400146cce0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400146cd20)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000e11500), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40004fcdb8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000a2bf10), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40002fbb68)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40004fd148)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E1014 14:25:37.170786       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	E1014 14:25:37.205712       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"45515951-fef1-4313-b792-523763e99310", ResourceVersion:"274", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63864512721, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20241007-36f62932\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400146cd80), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400146cda0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x400146cdc0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string
{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400146cde0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil),
FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400146ce00), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Glust
erfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400146ce20), EmptyDi
r:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil),
PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20241007-36f62932", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400146ce40)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400146ce80)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:
0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:
(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000e11860), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4000612808), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000a72000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}},
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40002fbb78)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4000612880)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1014 14:25:37.213330       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	E1014 14:25:37.253182       1 daemon_controller.go:320] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"45515951-fef1-4313-b792-523763e99310", ResourceVersion:"396", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63864512721, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"},\"name\":\"kindnet\",\"namespace\":\"kube-system\"},\"spec\":{\"selector\":{\"matchLabels\":{\"app\":\"k
indnet\"}},\"template\":{\"metadata\":{\"labels\":{\"app\":\"kindnet\",\"k8s-app\":\"kindnet\",\"tier\":\"node\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"HOST_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.hostIP\"}}},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}},{\"name\":\"POD_SUBNET\",\"value\":\"10.244.0.0/16\"}],\"image\":\"docker.io/kindest/kindnetd:v20241007-36f62932\",\"name\":\"kindnet-cni\",\"resources\":{\"limits\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"},\"requests\":{\"cpu\":\"100m\",\"memory\":\"50Mi\"}},\"securityContext\":{\"capabilities\":{\"add\":[\"NET_RAW\",\"NET_ADMIN\"]},\"privileged\":false},\"volumeMounts\":[{\"mountPath\":\"/etc/cni/net.d\",\"name\":\"cni-cfg\"},{\"mountPath\":\"/run/xtables.lock\",\"name\":\"xtables-lock\",\"readOnly\":false},{\"mountPath\":\"/lib/modules\",\"name\":\"lib-modules\",\"readOnly\":true}]}],\"hostNetwork\":true,\"serviceAccountName\":\"kindnet\",\"tolerations\":[{\"effect\":\"NoSchedule\",\"operator\
":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/etc/cni/net.d\",\"type\":\"DirectoryOrCreate\"},\"name\":\"cni-cfg\"},{\"hostPath\":{\"path\":\"/run/xtables.lock\",\"type\":\"FileOrCreate\"},\"name\":\"xtables-lock\"},{\"hostPath\":{\"path\":\"/lib/modules\"},\"name\":\"lib-modules\"}]}}}}\n"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001f0c960), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001f0c980)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001f0c9a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001f0c9c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001f0c9e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generatio
n:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001f0ca00), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:
(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001f0ca20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlo
ckStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CS
I:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001f0ca40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Q
uobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"docker.io/kindest/kindnetd:v20241007-36f62932", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001f0ca60)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001f0caa0)}, v1.EnvVar{Name:"POD_SUBNET", Value:"10.244.0.0/16", ValueFrom:(*v1.EnvVarSource)(nil)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i
:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", Sub
Path:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001efaba0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x4001e7f558), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000509dc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinit
y:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x400189f0a8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x4001e7f5a0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v
1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
	I1014 14:25:37.513511       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1014 14:25:37.537432       1 shared_informer.go:247] Caches are synced for garbage collector 
	I1014 14:25:37.537454       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1014 14:25:38.619630       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I1014 14:25:38.655981       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-ghpcx"
	I1014 14:25:41.851653       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I1014 14:27:21.065852       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E1014 14:27:21.259772       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [2ad5db99d8a9f7057e1fd778bf0ca4e5c68f3f9822b3a1bb41bc768d030608e2] <==
	I1014 14:25:38.056041       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I1014 14:25:38.056144       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W1014 14:25:38.113681       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1014 14:25:38.113779       1 server_others.go:185] Using iptables Proxier.
	I1014 14:25:38.113988       1 server.go:650] Version: v1.20.0
	I1014 14:25:38.114497       1 config.go:315] Starting service config controller
	I1014 14:25:38.114507       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1014 14:25:38.116866       1 config.go:224] Starting endpoint slice config controller
	I1014 14:25:38.116879       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1014 14:25:38.214680       1 shared_informer.go:247] Caches are synced for service config 
	I1014 14:25:38.217004       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [d02c985f81647cbe7e1a590dd7acca3343a87193b8a80f164dece0f9e2c5c560] <==
	I1014 14:28:07.334556       1 node.go:172] Successfully retrieved node IP: 192.168.85.2
	I1014 14:28:07.334633       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.85.2), assume IPv4 operation
	W1014 14:28:07.350742       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I1014 14:28:07.351010       1 server_others.go:185] Using iptables Proxier.
	I1014 14:28:07.351374       1 server.go:650] Version: v1.20.0
	I1014 14:28:07.352538       1 config.go:315] Starting service config controller
	I1014 14:28:07.352724       1 shared_informer.go:240] Waiting for caches to sync for service config
	I1014 14:28:07.352835       1 config.go:224] Starting endpoint slice config controller
	I1014 14:28:07.352921       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I1014 14:28:07.452962       1 shared_informer.go:247] Caches are synced for service config 
	I1014 14:28:07.453159       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [2ffd812c6a8f3a0c8955aa0bc1dcda72cd2b449d10f74400dd5f13a554aa6d85] <==
	I1014 14:27:56.530937       1 serving.go:331] Generated self-signed cert in-memory
	I1014 14:28:05.121613       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1014 14:28:05.122861       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1014 14:28:05.122873       1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1014 14:28:05.122889       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I1014 14:28:05.135478       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 14:28:05.135508       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 14:28:05.135534       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1014 14:28:05.135538       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1014 14:28:05.225148       1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
	I1014 14:28:05.237429       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
	I1014 14:28:05.237494       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [c0e7973b2a3d77c06ba4b0697deba8dc30ded1bfe798e7724746ed6da918124a] <==
	I1014 14:25:12.747714       1 serving.go:331] Generated self-signed cert in-memory
	W1014 14:25:17.750551       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1014 14:25:17.750778       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1014 14:25:17.750862       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1014 14:25:17.750938       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1014 14:25:17.839333       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I1014 14:25:17.845756       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I1014 14:25:17.845833       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1014 14:25:17.845840       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1014 14:25:17.850208       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1014 14:25:17.851140       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1014 14:25:17.853357       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1014 14:25:17.853640       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 14:25:17.853719       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1014 14:25:17.853786       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1014 14:25:17.853838       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1014 14:25:17.853904       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1014 14:25:17.853963       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1014 14:25:17.857832       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1014 14:25:17.859221       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1014 14:25:17.859300       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1014 14:25:18.704180       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1014 14:25:18.717118       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1014 14:25:19.445987       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
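The "forbidden" and "Unable to get configmap/extension-apiserver-authentication" messages above are the usual startup race: the scheduler's informers begin listing before the bootstrap RBAC bindings exist, and they stop once the client-ca cache syncs on the last line. If they persisted, the role binding hinted at in the log could be created explicitly; a hypothetical invocation (the binding name and subject here are chosen for illustration only) might look like:

  kubectl -n kube-system create rolebinding extension-apiserver-authentication-reader \
    --role=extension-apiserver-authentication-reader --user=system:kube-scheduler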
	
	
	==> kubelet <==
	Oct 14 14:32:31 old-k8s-version-805757 kubelet[663]: I1014 14:32:31.308965     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: dc5088c9224e75b2683c4bb24ca7da4f6d319228071e6e905cb675161ce0de88
	Oct 14 14:32:31 old-k8s-version-805757 kubelet[663]: E1014 14:32:31.309863     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	Oct 14 14:32:41 old-k8s-version-805757 kubelet[663]: E1014 14:32:41.312511     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 14 14:32:42 old-k8s-version-805757 kubelet[663]: I1014 14:32:42.308923     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: dc5088c9224e75b2683c4bb24ca7da4f6d319228071e6e905cb675161ce0de88
	Oct 14 14:32:42 old-k8s-version-805757 kubelet[663]: E1014 14:32:42.309556     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	Oct 14 14:32:53 old-k8s-version-805757 kubelet[663]: I1014 14:32:53.309531     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: dc5088c9224e75b2683c4bb24ca7da4f6d319228071e6e905cb675161ce0de88
	Oct 14 14:32:53 old-k8s-version-805757 kubelet[663]: E1014 14:32:53.310344     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	Oct 14 14:32:56 old-k8s-version-805757 kubelet[663]: E1014 14:32:56.309439     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 14 14:33:08 old-k8s-version-805757 kubelet[663]: I1014 14:33:08.308799     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: dc5088c9224e75b2683c4bb24ca7da4f6d319228071e6e905cb675161ce0de88
	Oct 14 14:33:08 old-k8s-version-805757 kubelet[663]: E1014 14:33:08.309189     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	Oct 14 14:33:11 old-k8s-version-805757 kubelet[663]: E1014 14:33:11.309693     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 14 14:33:21 old-k8s-version-805757 kubelet[663]: I1014 14:33:21.308788     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: dc5088c9224e75b2683c4bb24ca7da4f6d319228071e6e905cb675161ce0de88
	Oct 14 14:33:21 old-k8s-version-805757 kubelet[663]: E1014 14:33:21.309193     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	Oct 14 14:33:23 old-k8s-version-805757 kubelet[663]: E1014 14:33:23.310309     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 14 14:33:33 old-k8s-version-805757 kubelet[663]: I1014 14:33:33.309290     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: dc5088c9224e75b2683c4bb24ca7da4f6d319228071e6e905cb675161ce0de88
	Oct 14 14:33:33 old-k8s-version-805757 kubelet[663]: E1014 14:33:33.309615     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	Oct 14 14:33:37 old-k8s-version-805757 kubelet[663]: E1014 14:33:37.309694     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 14 14:33:45 old-k8s-version-805757 kubelet[663]: I1014 14:33:45.309233     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: dc5088c9224e75b2683c4bb24ca7da4f6d319228071e6e905cb675161ce0de88
	Oct 14 14:33:45 old-k8s-version-805757 kubelet[663]: E1014 14:33:45.309634     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	Oct 14 14:33:51 old-k8s-version-805757 kubelet[663]: E1014 14:33:51.329685     663 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Oct 14 14:33:51 old-k8s-version-805757 kubelet[663]: E1014 14:33:51.329742     663 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Oct 14 14:33:51 old-k8s-version-805757 kubelet[663]: E1014 14:33:51.330221     663 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-tjn68,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-zks7j_kube-system(5dd1007
f-cc13-48d0-801d-f8f22505e114): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host
	Oct 14 14:33:51 old-k8s-version-805757 kubelet[663]: E1014 14:33:51.330270     663 pod_workers.go:191] Error syncing pod 5dd1007f-cc13-48d0-801d-f8f22505e114 ("metrics-server-9975d5f86-zks7j_kube-system(5dd1007f-cc13-48d0-801d-f8f22505e114)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Oct 14 14:33:57 old-k8s-version-805757 kubelet[663]: I1014 14:33:57.308895     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: dc5088c9224e75b2683c4bb24ca7da4f6d319228071e6e905cb675161ce0de88
	Oct 14 14:33:57 old-k8s-version-805757 kubelet[663]: E1014 14:33:57.309769     663 pod_workers.go:191] Error syncing pod e8f619e3-d6c9-403e-9ed3-a142362c9b2a ("dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-xjhz5_kubernetes-dashboard(e8f619e3-d6c9-403e-9ed3-a142362c9b2a)"
	
	
	==> kubernetes-dashboard [d77ae50b9c30cf70a9c9234c8e136c5bbfbc2df275ea64fc301df3a39321a592] <==
	2024/10/14 14:28:30 Starting overwatch
	2024/10/14 14:28:30 Using namespace: kubernetes-dashboard
	2024/10/14 14:28:30 Using in-cluster config to connect to apiserver
	2024/10/14 14:28:30 Using secret token for csrf signing
	2024/10/14 14:28:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/10/14 14:28:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/10/14 14:28:30 Successful initial request to the apiserver, version: v1.20.0
	2024/10/14 14:28:30 Generating JWE encryption key
	2024/10/14 14:28:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/10/14 14:28:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/10/14 14:28:30 Initializing JWE encryption key from synchronized object
	2024/10/14 14:28:30 Creating in-cluster Sidecar client
	2024/10/14 14:28:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/14 14:28:31 Serving insecurely on HTTP port: 9090
	2024/10/14 14:29:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/14 14:29:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/14 14:30:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/14 14:30:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/14 14:31:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/14 14:31:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/14 14:32:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/14 14:32:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/14 14:33:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/10/14 14:33:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [72aa351ee44e0112bfe7f6c5290b71f617c5f61c9d471cb16a86fa4783a604cc] <==
	I1014 14:28:07.290543       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1014 14:28:37.293454       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9fd76e6e07f511d2eda883f06e9d54e137a9eee4cae7837d973fd4688fa6ef98] <==
	I1014 14:28:52.419689       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1014 14:28:52.434666       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1014 14:28:52.434782       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1014 14:29:09.987065       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1014 14:29:09.987449       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-805757_efd2d873-3172-4624-8ac8-71b886e005a9!
	I1014 14:29:09.987606       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"462c9605-281c-4bda-b044-bc8bae57d97b", APIVersion:"v1", ResourceVersion:"838", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-805757_efd2d873-3172-4624-8ac8-71b886e005a9 became leader
	I1014 14:29:10.092753       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-805757_efd2d873-3172-4624-8ac8-71b886e005a9!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-805757 -n old-k8s-version-805757
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-805757 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-zks7j
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-805757 describe pod metrics-server-9975d5f86-zks7j
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-805757 describe pod metrics-server-9975d5f86-zks7j: exit status 1 (101.107491ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-zks7j" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-805757 describe pod metrics-server-9975d5f86-zks7j: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (382.94s)


Test pass (299/329)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.33
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.31.1/json-events 9.57
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.21
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.33
21 TestBinaryMirror 0.57
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 151.19
31 TestAddons/serial/GCPAuth/Namespaces 0.17
32 TestAddons/serial/GCPAuth/PullSecret 8.88
34 TestAddons/parallel/Registry 16.25
35 TestAddons/parallel/Ingress 19.44
36 TestAddons/parallel/InspektorGadget 11.79
37 TestAddons/parallel/MetricsServer 6.79
39 TestAddons/parallel/CSI 61.69
40 TestAddons/parallel/Headlamp 16.28
41 TestAddons/parallel/CloudSpanner 6.69
42 TestAddons/parallel/LocalPath 8.84
43 TestAddons/parallel/NvidiaDevicePlugin 5.69
44 TestAddons/parallel/Yakd 11.84
46 TestAddons/StoppedEnableDisable 12.32
47 TestCertOptions 38.77
48 TestCertExpiration 228.46
50 TestForceSystemdFlag 39.34
51 TestForceSystemdEnv 41.95
52 TestDockerEnvContainerd 44.19
57 TestErrorSpam/setup 32.12
58 TestErrorSpam/start 0.71
59 TestErrorSpam/status 1.05
60 TestErrorSpam/pause 1.73
61 TestErrorSpam/unpause 1.83
62 TestErrorSpam/stop 1.47
65 TestFunctional/serial/CopySyncFile 0
66 TestFunctional/serial/StartWithProxy 47.3
67 TestFunctional/serial/AuditLog 0
68 TestFunctional/serial/SoftStart 6.28
69 TestFunctional/serial/KubeContext 0.06
70 TestFunctional/serial/KubectlGetPods 0.13
73 TestFunctional/serial/CacheCmd/cache/add_remote 3.88
74 TestFunctional/serial/CacheCmd/cache/add_local 6.28
75 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
76 TestFunctional/serial/CacheCmd/cache/list 0.06
77 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
78 TestFunctional/serial/CacheCmd/cache/cache_reload 1.96
79 TestFunctional/serial/CacheCmd/cache/delete 0.12
80 TestFunctional/serial/MinikubeKubectlCmd 0.14
81 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
82 TestFunctional/serial/ExtraConfig 41.58
83 TestFunctional/serial/ComponentHealth 0.1
84 TestFunctional/serial/LogsCmd 1.74
85 TestFunctional/serial/LogsFileCmd 1.7
86 TestFunctional/serial/InvalidService 4.59
88 TestFunctional/parallel/ConfigCmd 0.48
89 TestFunctional/parallel/DashboardCmd 13.48
90 TestFunctional/parallel/DryRun 0.4
91 TestFunctional/parallel/InternationalLanguage 0.18
92 TestFunctional/parallel/StatusCmd 1
96 TestFunctional/parallel/ServiceCmdConnect 10.62
97 TestFunctional/parallel/AddonsCmd 0.2
98 TestFunctional/parallel/PersistentVolumeClaim 25.14
100 TestFunctional/parallel/SSHCmd 0.68
101 TestFunctional/parallel/CpCmd 2.22
103 TestFunctional/parallel/FileSync 0.32
104 TestFunctional/parallel/CertSync 2.1
108 TestFunctional/parallel/NodeLabels 0.09
110 TestFunctional/parallel/NonActiveRuntimeDisabled 0.63
112 TestFunctional/parallel/License 0.29
114 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.41
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
124 TestFunctional/parallel/ServiceCmd/DeployApp 6.25
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
126 TestFunctional/parallel/ProfileCmd/profile_list 0.4
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.72
128 TestFunctional/parallel/ServiceCmd/List 0.77
129 TestFunctional/parallel/MountCmd/any-port 8.2
130 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
131 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
132 TestFunctional/parallel/ServiceCmd/Format 0.47
133 TestFunctional/parallel/ServiceCmd/URL 0.43
134 TestFunctional/parallel/MountCmd/specific-port 1.97
135 TestFunctional/parallel/MountCmd/VerifyCleanup 1.78
136 TestFunctional/parallel/Version/short 0.08
137 TestFunctional/parallel/Version/components 1.27
138 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
139 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
140 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
141 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
142 TestFunctional/parallel/ImageCommands/ImageBuild 3.84
143 TestFunctional/parallel/ImageCommands/Setup 0.73
144 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.32
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.09
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.4
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.43
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.78
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.5
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 116.5
161 TestMultiControlPlane/serial/DeployApp 32.15
162 TestMultiControlPlane/serial/PingHostFromPods 1.67
163 TestMultiControlPlane/serial/AddWorkerNode 22.69
164 TestMultiControlPlane/serial/NodeLabels 0.11
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.97
166 TestMultiControlPlane/serial/CopyFile 19.07
167 TestMultiControlPlane/serial/StopSecondaryNode 12.82
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
169 TestMultiControlPlane/serial/RestartSecondaryNode 18.9
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.01
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 120.27
172 TestMultiControlPlane/serial/DeleteSecondaryNode 10.8
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.73
174 TestMultiControlPlane/serial/StopCluster 36
175 TestMultiControlPlane/serial/RestartCluster 78.87
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.79
177 TestMultiControlPlane/serial/AddSecondaryNode 48.46
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.99
182 TestJSONOutput/start/Command 53.11
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.76
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.69
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 5.78
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.23
207 TestKicCustomNetwork/create_custom_network 38.55
208 TestKicCustomNetwork/use_default_bridge_network 32.1
209 TestKicExistingNetwork 32.89
210 TestKicCustomSubnet 34.39
211 TestKicStaticIP 34.33
212 TestMainNoArgs 0.05
213 TestMinikubeProfile 72.8
216 TestMountStart/serial/StartWithMountFirst 6.31
217 TestMountStart/serial/VerifyMountFirst 0.26
218 TestMountStart/serial/StartWithMountSecond 5.88
219 TestMountStart/serial/VerifyMountSecond 0.26
220 TestMountStart/serial/DeleteFirst 1.61
221 TestMountStart/serial/VerifyMountPostDelete 0.26
222 TestMountStart/serial/Stop 1.26
223 TestMountStart/serial/RestartStopped 7.35
224 TestMountStart/serial/VerifyMountPostStop 0.25
227 TestMultiNode/serial/FreshStart2Nodes 76.2
228 TestMultiNode/serial/DeployApp2Nodes 15.31
229 TestMultiNode/serial/PingHostFrom2Pods 0.99
230 TestMultiNode/serial/AddNode 15.83
231 TestMultiNode/serial/MultiNodeLabels 0.11
232 TestMultiNode/serial/ProfileList 0.68
233 TestMultiNode/serial/CopyFile 10.08
234 TestMultiNode/serial/StopNode 2.22
235 TestMultiNode/serial/StartAfterStop 9.63
236 TestMultiNode/serial/RestartKeepsNodes 94.27
237 TestMultiNode/serial/DeleteNode 5.76
238 TestMultiNode/serial/StopMultiNode 23.99
239 TestMultiNode/serial/RestartMultiNode 52.9
240 TestMultiNode/serial/ValidateNameConflict 32.2
245 TestPreload 123.11
247 TestScheduledStopUnix 108.13
250 TestInsufficientStorage 10.46
251 TestRunningBinaryUpgrade 82.06
253 TestKubernetesUpgrade 348.44
254 TestMissingContainerUpgrade 167.19
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
257 TestNoKubernetes/serial/StartWithK8s 39.46
258 TestNoKubernetes/serial/StartWithStopK8s 19.08
259 TestNoKubernetes/serial/Start 7.03
260 TestNoKubernetes/serial/VerifyK8sNotRunning 0.36
261 TestNoKubernetes/serial/ProfileList 1.28
262 TestNoKubernetes/serial/Stop 1.27
263 TestNoKubernetes/serial/StartNoArgs 8
264 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.48
265 TestStoppedBinaryUpgrade/Setup 0.7
266 TestStoppedBinaryUpgrade/Upgrade 108.14
267 TestStoppedBinaryUpgrade/MinikubeLogs 1.41
276 TestPause/serial/Start 62.19
277 TestPause/serial/SecondStartNoReconfiguration 7.16
278 TestPause/serial/Pause 1.01
279 TestPause/serial/VerifyStatus 0.36
280 TestPause/serial/Unpause 0.77
281 TestPause/serial/PauseAgain 1.12
282 TestPause/serial/DeletePaused 3.53
283 TestPause/serial/VerifyDeletedResources 0.46
291 TestNetworkPlugins/group/false 4.62
296 TestStartStop/group/old-k8s-version/serial/FirstStart 154.03
297 TestStartStop/group/old-k8s-version/serial/DeployApp 10.15
299 TestStartStop/group/no-preload/serial/FirstStart 71.29
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.89
301 TestStartStop/group/old-k8s-version/serial/Stop 14.66
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
304 TestStartStop/group/no-preload/serial/DeployApp 8.42
305 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.21
306 TestStartStop/group/no-preload/serial/Stop 12.06
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
308 TestStartStop/group/no-preload/serial/SecondStart 303.11
309 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
310 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
312 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
313 TestStartStop/group/no-preload/serial/Pause 3.22
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.45
316 TestStartStop/group/old-k8s-version/serial/Pause 3.77
318 TestStartStop/group/embed-certs/serial/FirstStart 58.13
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 69.26
321 TestStartStop/group/embed-certs/serial/DeployApp 9.35
322 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
323 TestStartStop/group/embed-certs/serial/Stop 12.11
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.52
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
326 TestStartStop/group/embed-certs/serial/SecondStart 267.48
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.68
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.58
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
330 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 290.84
331 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
332 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
333 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
334 TestStartStop/group/embed-certs/serial/Pause 3.09
336 TestStartStop/group/newest-cni/serial/FirstStart 34.47
337 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
338 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
339 TestStartStop/group/newest-cni/serial/DeployApp 0
340 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.42
341 TestStartStop/group/newest-cni/serial/Stop 1.28
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
343 TestStartStop/group/newest-cni/serial/SecondStart 21.03
344 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
345 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.84
346 TestNetworkPlugins/group/auto/Start 71.56
347 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
349 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
350 TestStartStop/group/newest-cni/serial/Pause 3.43
351 TestNetworkPlugins/group/kindnet/Start 57.87
352 TestNetworkPlugins/group/auto/KubeletFlags 0.36
353 TestNetworkPlugins/group/auto/NetCatPod 9.29
354 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
355 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
356 TestNetworkPlugins/group/kindnet/NetCatPod 9.25
357 TestNetworkPlugins/group/auto/DNS 0.19
358 TestNetworkPlugins/group/auto/Localhost 0.21
359 TestNetworkPlugins/group/auto/HairPin 0.17
360 TestNetworkPlugins/group/kindnet/DNS 0.26
361 TestNetworkPlugins/group/kindnet/Localhost 0.22
362 TestNetworkPlugins/group/kindnet/HairPin 0.22
363 TestNetworkPlugins/group/calico/Start 77.95
364 TestNetworkPlugins/group/custom-flannel/Start 58.07
365 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
366 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.28
367 TestNetworkPlugins/group/calico/ControllerPod 6.01
368 TestNetworkPlugins/group/custom-flannel/DNS 0.22
369 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
370 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
371 TestNetworkPlugins/group/calico/KubeletFlags 0.31
372 TestNetworkPlugins/group/calico/NetCatPod 11.27
373 TestNetworkPlugins/group/calico/DNS 0.24
374 TestNetworkPlugins/group/calico/Localhost 0.21
375 TestNetworkPlugins/group/calico/HairPin 0.42
376 TestNetworkPlugins/group/enable-default-cni/Start 50.72
377 TestNetworkPlugins/group/flannel/Start 53.54
378 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.39
379 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.36
380 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
381 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
382 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.37
385 TestNetworkPlugins/group/flannel/NetCatPod 10.36
386 TestNetworkPlugins/group/bridge/Start 53.3
387 TestNetworkPlugins/group/flannel/DNS 0.47
388 TestNetworkPlugins/group/flannel/Localhost 0.26
389 TestNetworkPlugins/group/flannel/HairPin 0.26
390 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
391 TestNetworkPlugins/group/bridge/NetCatPod 10.26
392 TestNetworkPlugins/group/bridge/DNS 0.16
393 TestNetworkPlugins/group/bridge/Localhost 0.15
394 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (8.33s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-191063 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-191063 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.332489309s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.33s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1014 13:38:49.712378    7542 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I1014 13:38:49.712457    7542 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-2229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-191063
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-191063: exit status 85 (74.153441ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-191063 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC |          |
	|         | -p download-only-191063        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 13:38:41
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 13:38:41.426504    7547 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:38:41.426654    7547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:38:41.426665    7547 out.go:358] Setting ErrFile to fd 2...
	I1014 13:38:41.426671    7547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:38:41.426908    7547 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2229/.minikube/bin
	W1014 13:38:41.427042    7547 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19790-2229/.minikube/config/config.json: open /home/jenkins/minikube-integration/19790-2229/.minikube/config/config.json: no such file or directory
	I1014 13:38:41.427438    7547 out.go:352] Setting JSON to true
	I1014 13:38:41.428176    7547 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":1273,"bootTime":1728911849,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 13:38:41.428247    7547 start.go:139] virtualization:  
	I1014 13:38:41.431291    7547 out.go:97] [download-only-191063] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1014 13:38:41.431483    7547 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19790-2229/.minikube/cache/preloaded-tarball: no such file or directory
	I1014 13:38:41.431516    7547 notify.go:220] Checking for updates...
	I1014 13:38:41.433228    7547 out.go:169] MINIKUBE_LOCATION=19790
	I1014 13:38:41.434895    7547 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:38:41.436667    7547 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19790-2229/kubeconfig
	I1014 13:38:41.438950    7547 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2229/.minikube
	I1014 13:38:41.440808    7547 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1014 13:38:41.445122    7547 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1014 13:38:41.445359    7547 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:38:41.470264    7547 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1014 13:38:41.470371    7547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 13:38:41.837602    7547 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-14 13:38:41.825409724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 13:38:41.837705    7547 docker.go:318] overlay module found
	I1014 13:38:41.839915    7547 out.go:97] Using the docker driver based on user configuration
	I1014 13:38:41.839942    7547 start.go:297] selected driver: docker
	I1014 13:38:41.839949    7547 start.go:901] validating driver "docker" against <nil>
	I1014 13:38:41.840055    7547 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 13:38:41.888385    7547 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-14 13:38:41.879443207 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 13:38:41.888608    7547 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 13:38:41.888915    7547 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1014 13:38:41.889124    7547 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 13:38:41.891943    7547 out.go:169] Using Docker driver with root privileges
	I1014 13:38:41.894096    7547 cni.go:84] Creating CNI manager for ""
	I1014 13:38:41.894161    7547 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1014 13:38:41.894175    7547 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 13:38:41.894261    7547 start.go:340] cluster config:
	{Name:download-only-191063 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-191063 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:38:41.896819    7547 out.go:97] Starting "download-only-191063" primary control-plane node in "download-only-191063" cluster
	I1014 13:38:41.896842    7547 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1014 13:38:41.899109    7547 out.go:97] Pulling base image v0.0.45-1728382586-19774 ...
	I1014 13:38:41.899141    7547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1014 13:38:41.899285    7547 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1014 13:38:41.913484    7547 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1014 13:38:41.913689    7547 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1014 13:38:41.913797    7547 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1014 13:38:41.955287    7547 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I1014 13:38:41.955335    7547 cache.go:56] Caching tarball of preloaded images
	I1014 13:38:41.955520    7547 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I1014 13:38:41.958010    7547 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1014 13:38:41.958040    7547 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I1014 13:38:42.040476    7547 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19790-2229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-191063 host does not exist
	  To start a cluster, run: "minikube start -p download-only-191063"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.16s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-191063
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnly/v1.31.1/json-events (9.57s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-532133 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-532133 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.568040361s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (9.57s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1014 13:38:59.720883    7542 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I1014 13:38:59.720921    7542 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19790-2229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-532133
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-532133: exit status 85 (76.749418ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-191063 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC |                     |
	|         | -p download-only-191063        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:38 UTC |
	| delete  | -p download-only-191063        | download-only-191063 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC | 14 Oct 24 13:38 UTC |
	| start   | -o=json --download-only        | download-only-532133 | jenkins | v1.34.0 | 14 Oct 24 13:38 UTC |                     |
	|         | -p download-only-532133        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/14 13:38:50
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.1 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1014 13:38:50.204703    7751 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:38:50.205403    7751 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:38:50.205454    7751 out.go:358] Setting ErrFile to fd 2...
	I1014 13:38:50.205477    7751 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:38:50.205793    7751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2229/.minikube/bin
	I1014 13:38:50.206265    7751 out.go:352] Setting JSON to true
	I1014 13:38:50.207077    7751 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":1282,"bootTime":1728911849,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 13:38:50.207186    7751 start.go:139] virtualization:  
	I1014 13:38:50.210152    7751 out.go:97] [download-only-532133] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1014 13:38:50.210493    7751 notify.go:220] Checking for updates...
	I1014 13:38:50.213126    7751 out.go:169] MINIKUBE_LOCATION=19790
	I1014 13:38:50.215138    7751 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:38:50.217338    7751 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19790-2229/kubeconfig
	I1014 13:38:50.219320    7751 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2229/.minikube
	I1014 13:38:50.221109    7751 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1014 13:38:50.224691    7751 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1014 13:38:50.224962    7751 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:38:50.247186    7751 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1014 13:38:50.247297    7751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 13:38:50.313992    7751 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-14 13:38:50.303941521 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 13:38:50.314110    7751 docker.go:318] overlay module found
	I1014 13:38:50.316046    7751 out.go:97] Using the docker driver based on user configuration
	I1014 13:38:50.316084    7751 start.go:297] selected driver: docker
	I1014 13:38:50.316091    7751 start.go:901] validating driver "docker" against <nil>
	I1014 13:38:50.316191    7751 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 13:38:50.368254    7751 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-14 13:38:50.359242536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 13:38:50.368486    7751 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1014 13:38:50.368779    7751 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1014 13:38:50.368950    7751 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1014 13:38:50.370914    7751 out.go:169] Using Docker driver with root privileges
	I1014 13:38:50.372744    7751 cni.go:84] Creating CNI manager for ""
	I1014 13:38:50.372799    7751 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1014 13:38:50.372813    7751 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1014 13:38:50.372891    7751 start.go:340] cluster config:
	{Name:download-only-532133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-532133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:38:50.375094    7751 out.go:97] Starting "download-only-532133" primary control-plane node in "download-only-532133" cluster
	I1014 13:38:50.375116    7751 cache.go:121] Beginning downloading kic base image for docker with containerd
	I1014 13:38:50.377345    7751 out.go:97] Pulling base image v0.0.45-1728382586-19774 ...
	I1014 13:38:50.377371    7751 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1014 13:38:50.377396    7751 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1014 13:38:50.391874    7751 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1014 13:38:50.392064    7751 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1014 13:38:50.392090    7751 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory, skipping pull
	I1014 13:38:50.392098    7751 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in cache, skipping pull
	I1014 13:38:50.392109    7751 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	I1014 13:38:50.428556    7751 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I1014 13:38:50.428591    7751 cache.go:56] Caching tarball of preloaded images
	I1014 13:38:50.428754    7751 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I1014 13:38:50.431343    7751 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1014 13:38:50.431371    7751 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I1014 13:38:50.515243    7751 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19790-2229/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-532133 host does not exist
	  To start a cluster, run: "minikube start -p download-only-532133"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)
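Note: exit status 85 from "minikube logs" is expected here; a --download-only profile only populates caches and never creates the control-plane host, so there is nothing to collect logs from. A rough manual reproduction (start flags inferred from the cluster config above, not all shown in this excerpt):

    out/minikube-linux-arm64 start -p download-only-532133 --download-only \
      --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 logs -p download-only-532133   # fails while the host does not exist
    echo "exit: $?"                                         # 85 in this run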

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-532133
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.33s)

                                                
                                    
x
+
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
I1014 13:39:01.211534    7542 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-153103 --alsologtostderr --binary-mirror http://127.0.0.1:37259 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-153103" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-153103
--- PASS: TestBinaryMirror (0.57s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:935: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-569374
addons_test.go:935: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-569374: exit status 85 (77.649036ms)

                                                
                                                
-- stdout --
	* Profile "addons-569374" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-569374"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:946: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-569374
addons_test.go:946: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-569374: exit status 85 (66.355287ms)

                                                
                                                
-- stdout --
	* Profile "addons-569374" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-569374"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (151.19s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-569374 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-569374 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m31.1858694s)
--- PASS: TestAddons/Setup (151.19s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-569374 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-569374 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/PullSecret (8.88s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-569374 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-569374 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a9ed0196-b810-42b2-b6e8-abeab2f8b249] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a9ed0196-b810-42b2-b6e8-abeab2f8b249] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: integration-test=busybox healthy within 8.003598681s
addons_test.go:633: (dbg) Run:  kubectl --context addons-569374 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-569374 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-569374 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-569374 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/PullSecret (8.88s)
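A minimal sketch of checking the gcp-auth injection by hand, reusing the exact exec commands from the run above (the "busybox" pod comes from testdata/busybox.yaml):

    kubectl --context addons-569374 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-569374 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
    kubectl --context addons-569374 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"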

                                                
                                    
x
+
TestAddons/parallel/Registry (16.25s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 4.739617ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-zcf42" [6251ea02-1362-47e1-ac9b-c623c958b8ea] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.011685952s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-kcr2s" [2653d442-3145-422a-9e46-69f2ced9ccf9] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003761634s
addons_test.go:331: (dbg) Run:  kubectl --context addons-569374 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-569374 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-569374 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.264730717s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-569374 ip
2024/10/14 13:45:37 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-569374 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.25s)
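A minimal sketch of probing the registry addon the same two ways as above: in-cluster via the service DNS name, and from the host via the node IP on port 5000 (the curl call stands in for the DEBUG GET shown in the log):

    kubectl --context addons-569374 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    REGISTRY_IP=$(out/minikube-linux-arm64 -p addons-569374 ip)
    curl -sI "http://${REGISTRY_IP}:5000"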

                                                
                                    
x
+
TestAddons/parallel/Ingress (19.44s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-569374 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-569374 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-569374 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [d7ce4ec2-e798-448b-ab9e-e3c16d08f8cd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [d7ce4ec2-e798-448b-ab9e-e3c16d08f8cd] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003796254s
I1014 13:46:29.870153    7542 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-569374 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-569374 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-569374 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-569374 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-569374 addons disable ingress-dns --alsologtostderr -v=1: (1.811399773s)
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-569374 addons disable ingress --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-569374 addons disable ingress --alsologtostderr -v=1: (7.884688961s)
--- PASS: TestAddons/parallel/Ingress (19.44s)
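A minimal sketch of the manual equivalent of this ingress check: wait for the controller, apply the test manifests, request through the node with the example Host header, then resolve the ingress-dns name against the node IP (commands taken from the run above):

    kubectl --context addons-569374 wait --for=condition=ready --namespace=ingress-nginx pod \
      --selector=app.kubernetes.io/component=controller --timeout=90s
    kubectl --context addons-569374 replace --force -f testdata/nginx-ingress-v1.yaml
    kubectl --context addons-569374 replace --force -f testdata/nginx-pod-svc.yaml
    out/minikube-linux-arm64 -p addons-569374 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-569374 ip)"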

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.79s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5g42s" [cfe9babd-4d6c-4651-a796-d3112e61c96e] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003880274s
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-569374 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-569374 addons disable inspektor-gadget --alsologtostderr -v=1: (5.784301433s)
--- PASS: TestAddons/parallel/InspektorGadget (11.79s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.79s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.546975ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-7jpwg" [7fa7d567-36e6-474d-89c2-177f4ac21f68] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00324676s
addons_test.go:402: (dbg) Run:  kubectl --context addons-569374 top pods -n kube-system
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-569374 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.79s)

                                                
                                    
x
+
TestAddons/parallel/CSI (61.69s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1014 13:45:47.224586    7542 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1014 13:45:47.230108    7542 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1014 13:45:47.230133    7542 kapi.go:107] duration metric: took 7.578494ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 7.58688ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-569374 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-569374 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a9671b20-da89-4959-ac74-f450481ccd21] Pending
helpers_test.go:344: "task-pv-pod" [a9671b20-da89-4959-ac74-f450481ccd21] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a9671b20-da89-4959-ac74-f450481ccd21] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004353669s
addons_test.go:511: (dbg) Run:  kubectl --context addons-569374 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-569374 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-569374 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-569374 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-569374 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-569374 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-569374 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [23990997-8ea2-43db-a9cf-1874d16e7b97] Pending
helpers_test.go:344: "task-pv-pod-restore" [23990997-8ea2-43db-a9cf-1874d16e7b97] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [23990997-8ea2-43db-a9cf-1874d16e7b97] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003665608s
addons_test.go:553: (dbg) Run:  kubectl --context addons-569374 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-569374 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-569374 delete volumesnapshot new-snapshot-demo
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-569374 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-569374 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-569374 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.860413627s)
--- PASS: TestAddons/parallel/CSI (61.69s)
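The repeated helpers_test.go:394 lines above are a poll of the PVC phase. A minimal sketch of the same wait, assuming the standard "Bound" phase is the target:

    until [ "$(kubectl --context addons-569374 get pvc hpvc -o jsonpath='{.status.phase}' -n default)" = "Bound" ]; do
      sleep 2
    done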

                                                
                                    
x
+
TestAddons/parallel/Headlamp (16.28s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-569374 --alsologtostderr -v=1
addons_test.go:743: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-569374 --alsologtostderr -v=1: (1.508189205s)
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-kltcv" [7035b660-34e3-4ce3-9b07-e4e75ecadb8a] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-kltcv" [7035b660-34e3-4ce3-9b07-e4e75ecadb8a] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-kltcv" [7035b660-34e3-4ce3-9b07-e4e75ecadb8a] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003655194s
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-569374 addons disable headlamp --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-569374 addons disable headlamp --alsologtostderr -v=1: (5.767293676s)
--- PASS: TestAddons/parallel/Headlamp (16.28s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.69s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-9vgs9" [45c147dd-706f-4599-bf7a-0c0915752aa5] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004118374s
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-569374 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.69s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (8.84s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:884: (dbg) Run:  kubectl --context addons-569374 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:890: (dbg) Run:  kubectl --context addons-569374 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-569374 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [bafd7cd1-7677-4418-a26d-374f2654a466] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [bafd7cd1-7677-4418-a26d-374f2654a466] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [bafd7cd1-7677-4418-a26d-374f2654a466] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.008064401s
addons_test.go:902: (dbg) Run:  kubectl --context addons-569374 get pvc test-pvc -o=json
addons_test.go:911: (dbg) Run:  out/minikube-linux-arm64 -p addons-569374 ssh "cat /opt/local-path-provisioner/pvc-3ff263b8-c078-4c94-86f9-67e829c847f7_default_test-pvc/file1"
addons_test.go:923: (dbg) Run:  kubectl --context addons-569374 delete pod test-local-path
addons_test.go:927: (dbg) Run:  kubectl --context addons-569374 delete pvc test-pvc
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-569374 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.84s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.69s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-spwkr" [3a139f01-b9d1-463a-ad5c-3a03da931e90] Running
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003959117s
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-569374 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.69s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:982: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-mdcc8" [3f4a87e4-2dcb-430a-b296-540c5b710fd7] Running
addons_test.go:982: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004124105s
addons_test.go:988: (dbg) Run:  out/minikube-linux-arm64 -p addons-569374 addons disable yakd --alsologtostderr -v=1
addons_test.go:988: (dbg) Done: out/minikube-linux-arm64 -p addons-569374 addons disable yakd --alsologtostderr -v=1: (5.838835687s)
--- PASS: TestAddons/parallel/Yakd (11.84s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.32s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-569374
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-569374: (12.038260414s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-569374
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-569374
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-569374
--- PASS: TestAddons/StoppedEnableDisable (12.32s)

                                                
                                    
x
+
TestCertOptions (38.77s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-897597 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-897597 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (36.123278326s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-897597 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-897597 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-897597 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-897597" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-897597
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-897597: (1.991525501s)
--- PASS: TestCertOptions (38.77s)
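A minimal sketch of verifying the custom certificate options by hand; the ssh and config view commands are from the run above, the grep filters are additions:

    out/minikube-linux-arm64 -p cert-options-897597 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | \
      grep -A1 "Subject Alternative Name"                            # expect 192.168.15.15 and www.google.com
    kubectl --context cert-options-897597 config view | grep 8555    # custom --apiserver-port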

                                                
                                    
x
+
TestCertExpiration (228.46s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-007181 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-007181 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (39.070527377s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-007181 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-007181 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.096859564s)
helpers_test.go:175: Cleaning up "cert-expiration-007181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-007181
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-007181: (2.289749053s)
--- PASS: TestCertExpiration (228.46s)

                                                
                                    
x
+
TestForceSystemdFlag (39.34s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-418551 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-418551 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.765855092s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-418551 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-418551" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-418551
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-418551: (2.206105247s)
--- PASS: TestForceSystemdFlag (39.34s)
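The test cats the whole containerd config; a sketch of checking the systemd cgroup setting specifically (SystemdCgroup is the upstream containerd option name, not something printed in this log):

    out/minikube-linux-arm64 -p force-systemd-flag-418551 ssh \
      "cat /etc/containerd/config.toml" | grep SystemdCgroup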

                                                
                                    
x
+
TestForceSystemdEnv (41.95s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-594800 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-594800 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.053058141s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-594800 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-594800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-594800
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-594800: (2.473546098s)
--- PASS: TestForceSystemdEnv (41.95s)

                                                
                                    
x
+
TestDockerEnvContainerd (44.19s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-166237 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-166237 --driver=docker  --container-runtime=containerd: (28.625156283s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-166237"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-vIBQA3vpQzSf/agent.29416" SSH_AGENT_PID="29417" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-vIBQA3vpQzSf/agent.29416" SSH_AGENT_PID="29417" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-vIBQA3vpQzSf/agent.29416" SSH_AGENT_PID="29417" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.182745972s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-vIBQA3vpQzSf/agent.29416" SSH_AGENT_PID="29417" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-166237" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-166237
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-166237: (1.971966758s)
--- PASS: TestDockerEnvContainerd (44.19s)
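A minimal sketch of the SSH-based docker-env flow exercised above; the test sets the variables inline, while eval is the usual interactive way to apply them:

    out/minikube-linux-arm64 start -p dockerenv-166237 --driver=docker --container-runtime=containerd
    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-166237)"
    docker version
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls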

                                                
                                    
x
+
TestErrorSpam/setup (32.12s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-476513 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-476513 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-476513 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-476513 --driver=docker  --container-runtime=containerd: (32.123502829s)
--- PASS: TestErrorSpam/setup (32.12s)

                                                
                                    
x
+
TestErrorSpam/start (0.71s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-476513 --log_dir /tmp/nospam-476513 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-476513 --log_dir /tmp/nospam-476513 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-476513 --log_dir /tmp/nospam-476513 start --dry-run
--- PASS: TestErrorSpam/start (0.71s)

                                                
                                    
x
+
TestErrorSpam/status (1.05s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-476513 --log_dir /tmp/nospam-476513 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-476513 --log_dir /tmp/nospam-476513 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-476513 --log_dir /tmp/nospam-476513 status
--- PASS: TestErrorSpam/status (1.05s)

                                                
                                    
x
+
TestErrorSpam/pause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-476513 --log_dir /tmp/nospam-476513 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-476513 --log_dir /tmp/nospam-476513 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-476513 --log_dir /tmp/nospam-476513 pause
--- PASS: TestErrorSpam/pause (1.73s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.83s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-476513 --log_dir /tmp/nospam-476513 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-476513 --log_dir /tmp/nospam-476513 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-476513 --log_dir /tmp/nospam-476513 unpause
--- PASS: TestErrorSpam/unpause (1.83s)

                                                
                                    
x
+
TestErrorSpam/stop (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-476513 --log_dir /tmp/nospam-476513 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-476513 --log_dir /tmp/nospam-476513 stop: (1.27643284s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-476513 --log_dir /tmp/nospam-476513 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-476513 --log_dir /tmp/nospam-476513 stop
--- PASS: TestErrorSpam/stop (1.47s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19790-2229/.minikube/files/etc/test/nested/copy/7542/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (47.3s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-729396 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-729396 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (47.302035545s)
--- PASS: TestFunctional/serial/StartWithProxy (47.30s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (6.28s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1014 13:49:22.597124    7542 config.go:182] Loaded profile config "functional-729396": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-729396 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-729396 --alsologtostderr -v=8: (6.273518239s)
functional_test.go:663: soft start took 6.275025036s for "functional-729396" cluster.
I1014 13:49:28.871154    7542 config.go:182] Loaded profile config "functional-729396": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (6.28s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-729396 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.13s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.88s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-729396 cache add registry.k8s.io/pause:3.1: (1.488463851s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-729396 cache add registry.k8s.io/pause:3.3: (1.28827197s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-729396 cache add registry.k8s.io/pause:latest: (1.102817476s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.88s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (6.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-729396 /tmp/TestFunctionalserialCacheCmdcacheadd_local2643485562/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 cache add minikube-local-cache-test:functional-729396
functional_test.go:1089: (dbg) Done: out/minikube-linux-arm64 -p functional-729396 cache add minikube-local-cache-test:functional-729396: (5.790256257s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 cache delete minikube-local-cache-test:functional-729396
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-729396
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (6.28s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.96s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-729396 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (334.286585ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.96s)
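A minimal sketch of the evict-and-reload round trip above: remove the cached image from the node, confirm it is gone, then restore it from minikube's local cache:

    out/minikube-linux-arm64 -p functional-729396 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-729396 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exits 1: image gone
    out/minikube-linux-arm64 -p functional-729396 cache reload
    out/minikube-linux-arm64 -p functional-729396 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again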

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 kubectl -- --context functional-729396 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-729396 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.58s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-729396 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-729396 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.575116702s)
functional_test.go:761: restart took 41.575220126s for "functional-729396" cluster.
I1014 13:50:23.542366    7542 config.go:182] Loaded profile config "functional-729396": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (41.58s)
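The restart exercised here amounts to one command against the existing profile; a minimal sketch, assuming minikube on PATH rather than the CI build:

    # restart the running cluster with an extra kube-apiserver flag and wait for all components to come back
    minikube start -p functional-729396 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all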

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-729396 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.74s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-729396 logs: (1.743631251s)
--- PASS: TestFunctional/serial/LogsCmd (1.74s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.7s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 logs --file /tmp/TestFunctionalserialLogsFileCmd2090988089/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-729396 logs --file /tmp/TestFunctionalserialLogsFileCmd2090988089/001/logs.txt: (1.694924148s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.70s)

                                                
                                    
TestFunctional/serial/InvalidService (4.59s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-729396 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-729396
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-729396: exit status 115 (785.633715ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32701 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-729396 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.59s)
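The failure mode checked above can be reproduced with the same fixture; a minimal sketch, assuming minikube on PATH and the testdata/invalidsvc.yaml fixture from the minikube repository (a Service with no running backing pod):

    kubectl --context functional-729396 apply -f testdata/invalidsvc.yaml
    minikube -p functional-729396 service invalid-svc    # expected: SVC_UNREACHABLE, exit status 115
    kubectl --context functional-729396 delete -f testdata/invalidsvc.yaml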

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.48s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-729396 config get cpus: exit status 14 (82.726547ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-729396 config get cpus: exit status 14 (73.035306ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
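The exit codes above follow from how minikube config behaves on unset keys; a minimal sketch, assuming minikube on PATH:

    minikube -p functional-729396 config unset cpus
    minikube -p functional-729396 config get cpus    # expected: exit 14, "specified key could not be found in config"
    minikube -p functional-729396 config set cpus 2
    minikube -p functional-729396 config get cpus    # expected: prints 2
    minikube -p functional-729396 config unset cpus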

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.48s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-729396 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-729396 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 44064: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.48s)

                                                
                                    
TestFunctional/parallel/DryRun (0.4s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-729396 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-729396 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (176.792747ms)

                                                
                                                
-- stdout --
	* [functional-729396] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-2229/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2229/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 13:51:04.124603   43678 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:51:04.124831   43678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:51:04.124865   43678 out.go:358] Setting ErrFile to fd 2...
	I1014 13:51:04.124884   43678 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:51:04.125242   43678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2229/.minikube/bin
	I1014 13:51:04.125735   43678 out.go:352] Setting JSON to false
	I1014 13:51:04.126848   43678 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":2016,"bootTime":1728911849,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 13:51:04.126995   43678 start.go:139] virtualization:  
	I1014 13:51:04.130006   43678 out.go:177] * [functional-729396] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1014 13:51:04.132508   43678 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 13:51:04.132551   43678 notify.go:220] Checking for updates...
	I1014 13:51:04.134724   43678 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:51:04.136899   43678 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-2229/kubeconfig
	I1014 13:51:04.138715   43678 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2229/.minikube
	I1014 13:51:04.140863   43678 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 13:51:04.142985   43678 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 13:51:04.145497   43678 config.go:182] Loaded profile config "functional-729396": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1014 13:51:04.146089   43678 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:51:04.166866   43678 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1014 13:51:04.167011   43678 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 13:51:04.227164   43678 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-14 13:51:04.216963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-
nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 13:51:04.227272   43678 docker.go:318] overlay module found
	I1014 13:51:04.230460   43678 out.go:177] * Using the docker driver based on existing profile
	I1014 13:51:04.232556   43678 start.go:297] selected driver: docker
	I1014 13:51:04.232579   43678 start.go:901] validating driver "docker" against &{Name:functional-729396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-729396 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:51:04.232703   43678 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 13:51:04.235387   43678 out.go:201] 
	W1014 13:51:04.237618   43678 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1014 13:51:04.240019   43678 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-729396 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.40s)
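The two dry runs above differ only in the memory request; a minimal sketch, assuming minikube on PATH:

    # under the 1800MB minimum the request is rejected during validation (exit 23) and nothing is changed
    minikube start -p functional-729396 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
    # with no memory override the same dry run validates cleanly against the existing profile
    minikube start -p functional-729396 --dry-run --driver=docker --container-runtime=containerd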

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-729396 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-729396 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (184.611731ms)

                                                
                                                
-- stdout --
	* [functional-729396] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-2229/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2229/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 13:51:03.945938   43634 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:51:03.946108   43634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:51:03.946138   43634 out.go:358] Setting ErrFile to fd 2...
	I1014 13:51:03.946159   43634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:51:03.946969   43634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2229/.minikube/bin
	I1014 13:51:03.947387   43634 out.go:352] Setting JSON to false
	I1014 13:51:03.948324   43634 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":2015,"bootTime":1728911849,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 13:51:03.948424   43634 start.go:139] virtualization:  
	I1014 13:51:03.951220   43634 out.go:177] * [functional-729396] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1014 13:51:03.953154   43634 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 13:51:03.953235   43634 notify.go:220] Checking for updates...
	I1014 13:51:03.956678   43634 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 13:51:03.958583   43634 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-2229/kubeconfig
	I1014 13:51:03.960267   43634 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2229/.minikube
	I1014 13:51:03.961929   43634 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 13:51:03.963837   43634 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 13:51:03.966263   43634 config.go:182] Loaded profile config "functional-729396": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1014 13:51:03.966834   43634 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 13:51:03.990705   43634 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1014 13:51:03.990822   43634 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 13:51:04.052435   43634 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-14 13:51:04.042302494 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 13:51:04.052544   43634 docker.go:318] overlay module found
	I1014 13:51:04.054511   43634 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1014 13:51:04.056344   43634 start.go:297] selected driver: docker
	I1014 13:51:04.056361   43634 start.go:901] validating driver "docker" against &{Name:functional-729396 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-729396 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1014 13:51:04.056474   43634 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 13:51:04.058857   43634 out.go:201] 
	W1014 13:51:04.060861   43634 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1014 13:51:04.062426   43634 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)
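The three invocations above cover the plain, Go-template, and JSON output modes; a minimal sketch, assuming minikube on PATH (the labels in the template are arbitrary, only the {{.Field}} names matter):

    minikube -p functional-729396 status
    minikube -p functional-729396 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    minikube -p functional-729396 status -o json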

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.62s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-729396 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-729396 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-2m4vk" [01812b72-01d9-4902-9850-a986eb6f7421] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-2m4vk" [01812b72-01d9-4902-9850-a986eb6f7421] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004636993s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32357
functional_test.go:1675: http://192.168.49.2:32357: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-2m4vk

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32357
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.62s)
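The flow above is the standard NodePort round trip; a minimal sketch, assuming minikube and kubectl on PATH (the curl call stands in for the HTTP GET the test performs):

    kubectl --context functional-729396 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
    kubectl --context functional-729396 expose deployment hello-node-connect --type=NodePort --port=8080
    minikube -p functional-729396 service hello-node-connect --url    # e.g. http://192.168.49.2:32357
    curl "$(minikube -p functional-729396 service hello-node-connect --url)"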

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.14s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7b87bcde-d4cb-4b93-886a-86b03b4eca2f] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004035189s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-729396 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-729396 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-729396 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-729396 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [5651349f-2e78-458b-9e95-9c5682765fb1] Pending
helpers_test.go:344: "sp-pod" [5651349f-2e78-458b-9e95-9c5682765fb1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [5651349f-2e78-458b-9e95-9c5682765fb1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004720951s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-729396 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-729396 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-729396 delete -f testdata/storage-provisioner/pod.yaml: (1.156254837s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-729396 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [495a1144-1c85-4e86-8464-0a0d33a0d5dd] Pending
helpers_test.go:344: "sp-pod" [495a1144-1c85-4e86-8464-0a0d33a0d5dd] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004068928s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-729396 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.14s)
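The persistence check above is: write through the PVC-backed mount, delete the pod, recreate it, and confirm the file is still there. A minimal sketch, assuming kubectl on PATH and the testdata/storage-provisioner fixtures from the minikube repository:

    kubectl --context functional-729396 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-729396 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-729396 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-729396 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-729396 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-729396 exec sp-pod -- ls /tmp/mount    # foo should still be listed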

                                                
                                    
TestFunctional/parallel/SSHCmd (0.68s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.22s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh -n functional-729396 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 cp functional-729396:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd279001286/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh -n functional-729396 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh -n functional-729396 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.22s)
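The copies above exercise host-to-node, node-to-host, and copy-into-a-missing-directory; a minimal sketch, assuming minikube on PATH (the local destination path is a placeholder):

    minikube -p functional-729396 cp testdata/cp-test.txt /home/docker/cp-test.txt
    minikube -p functional-729396 cp functional-729396:/home/docker/cp-test.txt /tmp/cp-test.txt
    minikube -p functional-729396 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
    minikube -p functional-729396 ssh -n functional-729396 "sudo cat /home/docker/cp-test.txt"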

                                                
                                    
TestFunctional/parallel/FileSync (0.32s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7542/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh "sudo cat /etc/test/nested/copy/7542/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

                                                
                                    
TestFunctional/parallel/CertSync (2.1s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7542.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh "sudo cat /etc/ssl/certs/7542.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7542.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh "sudo cat /usr/share/ca-certificates/7542.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/75422.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh "sudo cat /etc/ssl/certs/75422.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/75422.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh "sudo cat /usr/share/ca-certificates/75422.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.10s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-729396 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-729396 ssh "sudo systemctl is-active docker": exit status 1 (329.057799ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-729396 ssh "sudo systemctl is-active crio": exit status 1 (301.351711ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.63s)
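With containerd selected as the runtime, the other runtime units are expected to be inactive, which is why both probes above exit non-zero; a minimal sketch, assuming minikube on PATH:

    minikube -p functional-729396 ssh "sudo systemctl is-active docker"    # expected output: inactive, non-zero exit
    minikube -p functional-729396 ssh "sudo systemctl is-active crio"      # expected output: inactive, non-zero exit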

                                                
                                    
TestFunctional/parallel/License (0.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-729396 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-729396 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-729396 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-729396 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 41151: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-729396 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-729396 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f82cb929-1a42-4f3b-8898-bc2178b8210f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f82cb929-1a42-4f3b-8898-bc2178b8210f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003630443s
I1014 13:50:42.894586    7542 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.41s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-729396 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)
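The ingress IP only appears while a tunnel is running; a minimal sketch, assuming minikube and kubectl on PATH and the testdata/testsvc.yaml LoadBalancer fixture (the tunnel is backgrounded here for brevity; the test runs it as a separate process):

    minikube -p functional-729396 tunnel &
    kubectl --context functional-729396 apply -f testdata/testsvc.yaml
    kubectl --context functional-729396 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'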

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.63.122 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-729396 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-729396 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-729396 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-n6zrd" [5e2d0c9d-7dc5-43c6-8b1b-ca36b2daff7d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-n6zrd" [5e2d0c9d-7dc5-43c6-8b1b-ca36b2daff7d] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.038657744s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.25s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "344.286106ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "51.964662ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.72s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "619.356455ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "98.003431ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.72s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.77s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.77s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.2s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-729396 /tmp/TestFunctionalparallelMountCmdany-port41304198/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728913860573810135" to /tmp/TestFunctionalparallelMountCmdany-port41304198/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728913860573810135" to /tmp/TestFunctionalparallelMountCmdany-port41304198/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728913860573810135" to /tmp/TestFunctionalparallelMountCmdany-port41304198/001/test-1728913860573810135
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-729396 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (416.359695ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1014 13:51:00.991240    7542 retry.go:31] will retry after 416.055535ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 14 13:51 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 14 13:51 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 14 13:51 test-1728913860573810135
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh cat /mount-9p/test-1728913860573810135
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-729396 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [1e416a30-b667-4847-8002-c3c98e12ecc0] Pending
helpers_test.go:344: "busybox-mount" [1e416a30-b667-4847-8002-c3c98e12ecc0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [1e416a30-b667-4847-8002-c3c98e12ecc0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [1e416a30-b667-4847-8002-c3c98e12ecc0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003723766s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-729396 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-729396 /tmp/TestFunctionalparallelMountCmdany-port41304198/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.20s)
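For reference, the any-port flow above can be approximated by hand against an existing profile. A minimal sketch, assuming the functional-729396 profile is running and using a placeholder host directory (the test drives the mount as a daemon; backgrounding it with & is an assumption here):

  # start a 9p mount of a host directory into the guest
  out/minikube-linux-arm64 mount -p functional-729396 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
  MOUNT_PID=$!

  # confirm the 9p filesystem is visible in the guest, then inspect it
  out/minikube-linux-arm64 -p functional-729396 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-729396 ssh -- ls -la /mount-9p

  # tear down: unmount in the guest and stop the mount process
  out/minikube-linux-arm64 -p functional-729396 ssh "sudo umount -f /mount-9p"
  kill "$MOUNT_PID"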

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 service list -o json
functional_test.go:1494: Took "548.859967ms" to run "out/minikube-linux-arm64 -p functional-729396 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)
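The JSON output of "service list" is meant for scripting. A rough sketch, assuming jq is available and that each list entry carries a Name field (an assumption about the JSON schema, which is not shown in this log):

  # list service names from the JSON form of "service list"
  out/minikube-linux-arm64 -p functional-729396 service list -o json | jq -r '.[].Name'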

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32083
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.47s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32083
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)
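Either the --https --url or the plain --url form yields an endpoint that can be probed directly. A small sketch; curl is an addition for illustration, not part of the test:

  # capture the NodePort endpoint and fetch it
  URL=$(out/minikube-linux-arm64 -p functional-729396 service hello-node --url)
  curl -s "$URL" | head -n 5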

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.97s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-729396 /tmp/TestFunctionalparallelMountCmdspecific-port1083865074/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-729396 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (371.218954ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1014 13:51:09.144908    7542 retry.go:31] will retry after 453.430465ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-729396 /tmp/TestFunctionalparallelMountCmdspecific-port1083865074/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-729396 ssh "sudo umount -f /mount-9p": exit status 1 (293.708759ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-729396 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-729396 /tmp/TestFunctionalparallelMountCmdspecific-port1083865074/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.97s)
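The specific-port variant only differs from any-port in pinning the 9p server port. A minimal sketch with a placeholder host directory, again backgrounding the mount by hand:

  # pin the mount server to port 46464, then verify from inside the guest
  out/minikube-linux-arm64 mount -p functional-729396 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 --port 46464 &
  out/minikube-linux-arm64 -p functional-729396 ssh "findmnt -T /mount-9p | grep 9p"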

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-729396 /tmp/TestFunctionalparallelMountCmdVerifyCleanup598491532/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-729396 /tmp/TestFunctionalparallelMountCmdVerifyCleanup598491532/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-729396 /tmp/TestFunctionalparallelMountCmdVerifyCleanup598491532/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-729396 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-729396 /tmp/TestFunctionalparallelMountCmdVerifyCleanup598491532/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-729396 /tmp/TestFunctionalparallelMountCmdVerifyCleanup598491532/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-729396 /tmp/TestFunctionalparallelMountCmdVerifyCleanup598491532/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.78s)
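Cleanup of several concurrent mounts is driven by the --kill flag rather than per-mount unmounts. Roughly, using a placeholder host directory:

  # three mounts of the same host directory at different guest paths
  out/minikube-linux-arm64 mount -p functional-729396 /tmp/hostdir:/mount1 --alsologtostderr -v=1 &
  out/minikube-linux-arm64 mount -p functional-729396 /tmp/hostdir:/mount2 --alsologtostderr -v=1 &
  out/minikube-linux-arm64 mount -p functional-729396 /tmp/hostdir:/mount3 --alsologtostderr -v=1 &

  # kill every mount process for this profile in one shot
  out/minikube-linux-arm64 mount -p functional-729396 --kill=true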

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.27s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-729396 version -o=json --components: (1.2661521s)
--- PASS: TestFunctional/parallel/Version/components (1.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-729396 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-729396
docker.io/kindest/kindnetd:v20241007-36f62932
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-729396
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-729396 image ls --format short --alsologtostderr:
I1014 13:51:21.059349   46536 out.go:345] Setting OutFile to fd 1 ...
I1014 13:51:21.059498   46536 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:51:21.059504   46536 out.go:358] Setting ErrFile to fd 2...
I1014 13:51:21.059508   46536 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:51:21.059755   46536 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2229/.minikube/bin
I1014 13:51:21.060514   46536 config.go:182] Loaded profile config "functional-729396": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1014 13:51:21.060927   46536 config.go:182] Loaded profile config "functional-729396": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1014 13:51:21.061560   46536 cli_runner.go:164] Run: docker container inspect functional-729396 --format={{.State.Status}}
I1014 13:51:21.093608   46536 ssh_runner.go:195] Run: systemctl --version
I1014 13:51:21.093701   46536 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-729396
I1014 13:51:21.118601   46536 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/functional-729396/id_rsa Username:docker}
I1014 13:51:21.209814   46536 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
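The same image inventory is reported in four encodings in the following subtests; which one to use is only a question of the consumer (human versus script). For quick comparison against this profile:

  out/minikube-linux-arm64 -p functional-729396 image ls --format short
  out/minikube-linux-arm64 -p functional-729396 image ls --format table
  out/minikube-linux-arm64 -p functional-729396 image ls --format json
  out/minikube-linux-arm64 -p functional-729396 image ls --format yaml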

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-729396 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/kicbase/echo-server               | functional-729396  | sha256:ce2d2c | 2.17MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/minikube-local-cache-test | functional-729396  | sha256:970fd1 | 991B   |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| docker.io/library/nginx                     | alpine             | sha256:577a23 | 21.5MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| docker.io/kindest/kindnetd                  | v20241007-36f62932 | sha256:0bcd66 | 35.3MB |
| docker.io/library/nginx                     | latest             | sha256:048e09 | 69.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-729396 image ls --format table --alsologtostderr:
I1014 13:51:22.023322   46750 out.go:345] Setting OutFile to fd 1 ...
I1014 13:51:22.023512   46750 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:51:22.023526   46750 out.go:358] Setting ErrFile to fd 2...
I1014 13:51:22.023532   46750 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:51:22.023930   46750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2229/.minikube/bin
I1014 13:51:22.024733   46750 config.go:182] Loaded profile config "functional-729396": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1014 13:51:22.025467   46750 config.go:182] Loaded profile config "functional-729396": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1014 13:51:22.026065   46750 cli_runner.go:164] Run: docker container inspect functional-729396 --format={{.State.Status}}
I1014 13:51:22.045276   46750 ssh_runner.go:195] Run: systemctl --version
I1014 13:51:22.045333   46750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-729396
I1014 13:51:22.062755   46750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/functional-729396/id_rsa Username:docker}
I1014 13:51:22.154038   46750 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-729396 image ls --format json --alsologtostderr:
[{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e569905
7e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-729396"],"size":"2173567"},{"id":"sha256:577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431","repoDigests":["docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250"],"repoTags":["docker.io/library/nginx:alpine"
],"size":"21533923"},{"id":"sha256:048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9","repoDigests":["docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"69600401"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8
e89f3b4a70baa","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"35320503"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"si
ze":"23948670"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:970fd156b69a277add3bf37a290adac48d59720b699d4c6e28450c7d30505ae9","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-729396"],"size":"991"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-729396 image ls --format json --alsologtostderr:
I1014 13:51:21.723149   46674 out.go:345] Setting OutFile to fd 1 ...
I1014 13:51:21.723358   46674 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:51:21.723389   46674 out.go:358] Setting ErrFile to fd 2...
I1014 13:51:21.723410   46674 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:51:21.723677   46674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2229/.minikube/bin
I1014 13:51:21.724315   46674 config.go:182] Loaded profile config "functional-729396": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1014 13:51:21.724480   46674 config.go:182] Loaded profile config "functional-729396": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1014 13:51:21.724991   46674 cli_runner.go:164] Run: docker container inspect functional-729396 --format={{.State.Status}}
I1014 13:51:21.745902   46674 ssh_runner.go:195] Run: systemctl --version
I1014 13:51:21.745957   46674 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-729396
I1014 13:51:21.765944   46674 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/functional-729396/id_rsa Username:docker}
I1014 13:51:21.873684   46674 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
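The JSON form lends itself to post-processing. A sketch that prints each image's first tag (or id, for untagged images) and its size; jq itself is an assumption, while the id/repoTags/size field names match the output shown above:

  out/minikube-linux-arm64 -p functional-729396 image ls --format json --alsologtostderr \
    | jq -r '.[] | "\(.repoTags[0] // .id)\t\(.size)"'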

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-729396 image ls --format yaml --alsologtostderr:
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:577a23b5858b94a1a92e4263bd5d6da99fbd997fb9839bc0f357c9d4b858f431
repoDigests:
- docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
repoTags:
- docker.io/library/nginx:alpine
size: "21533923"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "35320503"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:970fd156b69a277add3bf37a290adac48d59720b699d4c6e28450c7d30505ae9
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-729396
size: "991"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-729396
size: "2173567"
- id: sha256:048e09038596626fc38392bfd1b77ac8d5a0d6d0183b513290307d4451bc44b9
repoDigests:
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "69600401"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-729396 image ls --format yaml --alsologtostderr:
I1014 13:51:21.331002   46582 out.go:345] Setting OutFile to fd 1 ...
I1014 13:51:21.331231   46582 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:51:21.331273   46582 out.go:358] Setting ErrFile to fd 2...
I1014 13:51:21.331297   46582 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:51:21.331639   46582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2229/.minikube/bin
I1014 13:51:21.332490   46582 config.go:182] Loaded profile config "functional-729396": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1014 13:51:21.332708   46582 config.go:182] Loaded profile config "functional-729396": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1014 13:51:21.333365   46582 cli_runner.go:164] Run: docker container inspect functional-729396 --format={{.State.Status}}
I1014 13:51:21.355659   46582 ssh_runner.go:195] Run: systemctl --version
I1014 13:51:21.355740   46582 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-729396
I1014 13:51:21.373830   46582 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/functional-729396/id_rsa Username:docker}
I1014 13:51:21.465651   46582 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.84s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-729396 ssh pgrep buildkitd: exit status 1 (320.302511ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 image build -t localhost/my-image:functional-729396 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-729396 image build -t localhost/my-image:functional-729396 testdata/build --alsologtostderr: (3.273001866s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-729396 image build -t localhost/my-image:functional-729396 testdata/build --alsologtostderr:
I1014 13:51:21.915012   46730 out.go:345] Setting OutFile to fd 1 ...
I1014 13:51:21.915291   46730 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:51:21.915332   46730 out.go:358] Setting ErrFile to fd 2...
I1014 13:51:21.915354   46730 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1014 13:51:21.915724   46730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2229/.minikube/bin
I1014 13:51:21.916611   46730 config.go:182] Loaded profile config "functional-729396": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1014 13:51:21.919667   46730 config.go:182] Loaded profile config "functional-729396": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I1014 13:51:21.920325   46730 cli_runner.go:164] Run: docker container inspect functional-729396 --format={{.State.Status}}
I1014 13:51:21.945781   46730 ssh_runner.go:195] Run: systemctl --version
I1014 13:51:21.945839   46730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-729396
I1014 13:51:21.982827   46730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/functional-729396/id_rsa Username:docker}
I1014 13:51:22.081879   46730 build_images.go:161] Building image from path: /tmp/build.4167794381.tar
I1014 13:51:22.081960   46730 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1014 13:51:22.093857   46730 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.4167794381.tar
I1014 13:51:22.098589   46730 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.4167794381.tar: stat -c "%s %y" /var/lib/minikube/build/build.4167794381.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.4167794381.tar': No such file or directory
I1014 13:51:22.098622   46730 ssh_runner.go:362] scp /tmp/build.4167794381.tar --> /var/lib/minikube/build/build.4167794381.tar (3072 bytes)
I1014 13:51:22.127995   46730 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.4167794381
I1014 13:51:22.137317   46730 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.4167794381 -xf /var/lib/minikube/build/build.4167794381.tar
I1014 13:51:22.147274   46730 containerd.go:394] Building image: /var/lib/minikube/build/build.4167794381
I1014 13:51:22.147349   46730 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4167794381 --local dockerfile=/var/lib/minikube/build/build.4167794381 --output type=image,name=localhost/my-image:functional-729396
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:492ba468902617aac6e28198ddc8dbe8e0892d70cee1a2116e75c3cc900fa91e
#8 exporting manifest sha256:492ba468902617aac6e28198ddc8dbe8e0892d70cee1a2116e75c3cc900fa91e 0.0s done
#8 exporting config sha256:f7631876849fad94139805dbf491580afb4b3a02b0efd24c647184ed9d4ac7c4 0.0s done
#8 naming to localhost/my-image:functional-729396 done
#8 DONE 0.1s
I1014 13:51:25.087593   46730 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.4167794381 --local dockerfile=/var/lib/minikube/build/build.4167794381 --output type=image,name=localhost/my-image:functional-729396: (2.940212868s)
I1014 13:51:25.087706   46730 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.4167794381
I1014 13:51:25.098114   46730 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.4167794381.tar
I1014 13:51:25.108682   46730 build_images.go:217] Built localhost/my-image:functional-729396 from /tmp/build.4167794381.tar
I1014 13:51:25.108718   46730 build_images.go:133] succeeded building to: functional-729396
I1014 13:51:25.108725   46730 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.84s)
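From the numbered buildkit steps above, the testdata/build context is evidently a three-instruction Dockerfile. A sketch of reproducing the build; the Dockerfile contents are inferred from steps #1 through #7, not copied from the repository:

  # Dockerfile (inferred from the build log):
  #   FROM gcr.io/k8s-minikube/busybox:latest
  #   RUN true
  #   ADD content.txt /

  out/minikube-linux-arm64 -p functional-729396 image build -t localhost/my-image:functional-729396 testdata/build --alsologtostderr
  out/minikube-linux-arm64 -p functional-729396 image ls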

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-729396
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 image load --daemon kicbase/echo-server:functional-729396 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-729396 image load --daemon kicbase/echo-server:functional-729396 --alsologtostderr: (1.099356649s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.32s)
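Loading a host Docker image into the cluster's containerd is a tag-then-load operation, as the Setup and ImageLoadDaemon steps show. Compactly:

  # tag a local image with the profile-specific name, push it into the node, verify
  docker pull kicbase/echo-server:1.0
  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-729396
  out/minikube-linux-arm64 -p functional-729396 image load --daemon kicbase/echo-server:functional-729396 --alsologtostderr
  out/minikube-linux-arm64 -p functional-729396 image ls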

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 image load --daemon kicbase/echo-server:functional-729396 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.09s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-729396
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 image load --daemon kicbase/echo-server:functional-729396 --alsologtostderr
2024/10/14 13:51:17 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.40s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 image save kicbase/echo-server:functional-729396 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 image rm kicbase/echo-server:functional-729396 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)
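ImageSaveToFile, ImageRemove and ImageLoadFromFile together form a round trip through a tarball. Approximately, using the same image and tar path as the test:

  IMG=kicbase/echo-server:functional-729396
  TAR=/home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar

  # export the image, remove it from the node, then restore it from the tarball
  out/minikube-linux-arm64 -p functional-729396 image save "$IMG" "$TAR" --alsologtostderr
  out/minikube-linux-arm64 -p functional-729396 image rm "$IMG" --alsologtostderr
  out/minikube-linux-arm64 -p functional-729396 image load "$TAR" --alsologtostderr
  out/minikube-linux-arm64 -p functional-729396 image ls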

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-729396
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-729396 image save --daemon kicbase/echo-server:functional-729396 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-729396
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.50s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-729396
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-729396
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-729396
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (116.5s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-614450 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1014 13:51:33.101206    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:33.107787    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:33.119169    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:33.140618    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:33.181963    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:33.263393    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:33.424730    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:33.745965    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:34.387845    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:35.670422    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:38.231756    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:43.353723    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:51:53.595239    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:52:14.076810    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:52:55.038183    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-614450 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m55.644902114s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (116.50s)
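The HA cluster is created with a single start invocation; the flags below are exactly those used above, and only the profile name would change for a local reproduction:

  out/minikube-linux-arm64 start -p ha-614450 --wait=true --memory=2200 --ha -v=7 --alsologtostderr \
    --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 -p ha-614450 status -v=7 --alsologtostderr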

                                                
                                    
TestMultiControlPlane/serial/DeployApp (32.15s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614450 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614450 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-614450 -- rollout status deployment/busybox: (29.042396917s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614450 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614450 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614450 -- exec busybox-7dff88458-4h2kf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614450 -- exec busybox-7dff88458-hmrq7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614450 -- exec busybox-7dff88458-mqcvp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614450 -- exec busybox-7dff88458-4h2kf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614450 -- exec busybox-7dff88458-hmrq7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614450 -- exec busybox-7dff88458-mqcvp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614450 -- exec busybox-7dff88458-4h2kf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614450 -- exec busybox-7dff88458-hmrq7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614450 -- exec busybox-7dff88458-mqcvp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (32.15s)
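The DeployApp check boils down to rolling out the busybox deployment and resolving three DNS names from each of its pods. A condensed sketch; looping over the pod names this way is an assumption about how one would do it by hand, not the test's own code:

  out/minikube-linux-arm64 kubectl -p ha-614450 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  out/minikube-linux-arm64 kubectl -p ha-614450 -- rollout status deployment/busybox

  for pod in $(out/minikube-linux-arm64 kubectl -p ha-614450 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
    for name in kubernetes.io kubernetes.default kubernetes.default.svc.cluster.local; do
      out/minikube-linux-arm64 kubectl -p ha-614450 -- exec "$pod" -- nslookup "$name"
    done
  done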

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.67s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614450 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614450 -- exec busybox-7dff88458-4h2kf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614450 -- exec busybox-7dff88458-4h2kf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614450 -- exec busybox-7dff88458-hmrq7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614450 -- exec busybox-7dff88458-hmrq7 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614450 -- exec busybox-7dff88458-mqcvp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-614450 -- exec busybox-7dff88458-mqcvp -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.67s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (22.69s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-614450 -v=7 --alsologtostderr
E1014 13:54:16.959949    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-614450 -v=7 --alsologtostderr: (21.657309295s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-614450 status -v=7 --alsologtostderr: (1.027657573s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (22.69s)
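
Note: the worker-node addition is driven by exactly the two commands logged above, so the manual equivalent for this profile is simply:
    out/minikube-linux-arm64 node add -p ha-614450 -v=7 --alsologtostderr
    out/minikube-linux-arm64 -p ha-614450 status -v=7 --alsologtostderr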

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-614450 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.97s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.97s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.07s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-614450 status --output json -v=7 --alsologtostderr: (1.054098641s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 cp testdata/cp-test.txt ha-614450:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 cp ha-614450:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3133172877/001/cp-test_ha-614450.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 cp ha-614450:/home/docker/cp-test.txt ha-614450-m02:/home/docker/cp-test_ha-614450_ha-614450-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m02 "sudo cat /home/docker/cp-test_ha-614450_ha-614450-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 cp ha-614450:/home/docker/cp-test.txt ha-614450-m03:/home/docker/cp-test_ha-614450_ha-614450-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m03 "sudo cat /home/docker/cp-test_ha-614450_ha-614450-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 cp ha-614450:/home/docker/cp-test.txt ha-614450-m04:/home/docker/cp-test_ha-614450_ha-614450-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m04 "sudo cat /home/docker/cp-test_ha-614450_ha-614450-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 cp testdata/cp-test.txt ha-614450-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 cp ha-614450-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3133172877/001/cp-test_ha-614450-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 cp ha-614450-m02:/home/docker/cp-test.txt ha-614450:/home/docker/cp-test_ha-614450-m02_ha-614450.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450 "sudo cat /home/docker/cp-test_ha-614450-m02_ha-614450.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 cp ha-614450-m02:/home/docker/cp-test.txt ha-614450-m03:/home/docker/cp-test_ha-614450-m02_ha-614450-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m03 "sudo cat /home/docker/cp-test_ha-614450-m02_ha-614450-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 cp ha-614450-m02:/home/docker/cp-test.txt ha-614450-m04:/home/docker/cp-test_ha-614450-m02_ha-614450-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m04 "sudo cat /home/docker/cp-test_ha-614450-m02_ha-614450-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 cp testdata/cp-test.txt ha-614450-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 cp ha-614450-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3133172877/001/cp-test_ha-614450-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 cp ha-614450-m03:/home/docker/cp-test.txt ha-614450:/home/docker/cp-test_ha-614450-m03_ha-614450.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450 "sudo cat /home/docker/cp-test_ha-614450-m03_ha-614450.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 cp ha-614450-m03:/home/docker/cp-test.txt ha-614450-m02:/home/docker/cp-test_ha-614450-m03_ha-614450-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m02 "sudo cat /home/docker/cp-test_ha-614450-m03_ha-614450-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 cp ha-614450-m03:/home/docker/cp-test.txt ha-614450-m04:/home/docker/cp-test_ha-614450-m03_ha-614450-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m04 "sudo cat /home/docker/cp-test_ha-614450-m03_ha-614450-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 cp testdata/cp-test.txt ha-614450-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 cp ha-614450-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3133172877/001/cp-test_ha-614450-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 cp ha-614450-m04:/home/docker/cp-test.txt ha-614450:/home/docker/cp-test_ha-614450-m04_ha-614450.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450 "sudo cat /home/docker/cp-test_ha-614450-m04_ha-614450.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 cp ha-614450-m04:/home/docker/cp-test.txt ha-614450-m02:/home/docker/cp-test_ha-614450-m04_ha-614450-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m02 "sudo cat /home/docker/cp-test_ha-614450-m04_ha-614450-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 cp ha-614450-m04:/home/docker/cp-test.txt ha-614450-m03:/home/docker/cp-test_ha-614450-m04_ha-614450-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m03 "sudo cat /home/docker/cp-test_ha-614450-m04_ha-614450-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.07s)
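
Note: the long command list above repeats one copy-and-verify pattern per node pair. A minimal manual version of that pattern, using the paths from this run, looks like:
    out/minikube-linux-arm64 -p ha-614450 cp testdata/cp-test.txt ha-614450-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m02 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-arm64 -p ha-614450 cp ha-614450-m02:/home/docker/cp-test.txt ha-614450-m03:/home/docker/cp-test_ha-614450-m02_ha-614450-m03.txt
    out/minikube-linux-arm64 -p ha-614450 ssh -n ha-614450-m03 "sudo cat /home/docker/cp-test_ha-614450-m02_ha-614450-m03.txt"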

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.82s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-614450 node stop m02 -v=7 --alsologtostderr: (12.093314619s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-614450 status -v=7 --alsologtostderr: exit status 7 (727.22391ms)

                                                
                                                
-- stdout --
	ha-614450
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-614450-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-614450-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-614450-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 13:54:53.385994   62978 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:54:53.386156   62978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:54:53.386164   62978 out.go:358] Setting ErrFile to fd 2...
	I1014 13:54:53.386170   62978 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:54:53.386442   62978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2229/.minikube/bin
	I1014 13:54:53.386647   62978 out.go:352] Setting JSON to false
	I1014 13:54:53.386692   62978 mustload.go:65] Loading cluster: ha-614450
	I1014 13:54:53.386801   62978 notify.go:220] Checking for updates...
	I1014 13:54:53.387171   62978 config.go:182] Loaded profile config "ha-614450": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1014 13:54:53.387192   62978 status.go:174] checking status of ha-614450 ...
	I1014 13:54:53.387913   62978 cli_runner.go:164] Run: docker container inspect ha-614450 --format={{.State.Status}}
	I1014 13:54:53.409046   62978 status.go:371] ha-614450 host status = "Running" (err=<nil>)
	I1014 13:54:53.409117   62978 host.go:66] Checking if "ha-614450" exists ...
	I1014 13:54:53.409414   62978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614450
	I1014 13:54:53.435076   62978 host.go:66] Checking if "ha-614450" exists ...
	I1014 13:54:53.435406   62978 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 13:54:53.435464   62978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614450
	I1014 13:54:53.454377   62978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/ha-614450/id_rsa Username:docker}
	I1014 13:54:53.546407   62978 ssh_runner.go:195] Run: systemctl --version
	I1014 13:54:53.554141   62978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:54:53.567486   62978 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 13:54:53.626043   62978 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-10-14 13:54:53.614465225 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 13:54:53.626955   62978 kubeconfig.go:125] found "ha-614450" server: "https://192.168.49.254:8443"
	I1014 13:54:53.626994   62978 api_server.go:166] Checking apiserver status ...
	I1014 13:54:53.627040   62978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 13:54:53.639430   62978 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1456/cgroup
	I1014 13:54:53.650325   62978 api_server.go:182] apiserver freezer: "12:freezer:/docker/0edd39457558ab01d29b91d3013378bf78bdbd3010a58fbcdab93d6925bc324a/kubepods/burstable/pod3e04b20152e421970af967cba4f12e52/6d0c8ad6205a92864ec4bf3c8b746e726e8eba7cc6cc750f87303e027760f148"
	I1014 13:54:53.650397   62978 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0edd39457558ab01d29b91d3013378bf78bdbd3010a58fbcdab93d6925bc324a/kubepods/burstable/pod3e04b20152e421970af967cba4f12e52/6d0c8ad6205a92864ec4bf3c8b746e726e8eba7cc6cc750f87303e027760f148/freezer.state
	I1014 13:54:53.660272   62978 api_server.go:204] freezer state: "THAWED"
	I1014 13:54:53.660312   62978 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1014 13:54:53.670834   62978 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1014 13:54:53.670867   62978 status.go:463] ha-614450 apiserver status = Running (err=<nil>)
	I1014 13:54:53.670879   62978 status.go:176] ha-614450 status: &{Name:ha-614450 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 13:54:53.670896   62978 status.go:174] checking status of ha-614450-m02 ...
	I1014 13:54:53.671214   62978 cli_runner.go:164] Run: docker container inspect ha-614450-m02 --format={{.State.Status}}
	I1014 13:54:53.695350   62978 status.go:371] ha-614450-m02 host status = "Stopped" (err=<nil>)
	I1014 13:54:53.695370   62978 status.go:384] host is not running, skipping remaining checks
	I1014 13:54:53.695396   62978 status.go:176] ha-614450-m02 status: &{Name:ha-614450-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 13:54:53.695416   62978 status.go:174] checking status of ha-614450-m03 ...
	I1014 13:54:53.695831   62978 cli_runner.go:164] Run: docker container inspect ha-614450-m03 --format={{.State.Status}}
	I1014 13:54:53.714066   62978 status.go:371] ha-614450-m03 host status = "Running" (err=<nil>)
	I1014 13:54:53.714093   62978 host.go:66] Checking if "ha-614450-m03" exists ...
	I1014 13:54:53.714411   62978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614450-m03
	I1014 13:54:53.730978   62978 host.go:66] Checking if "ha-614450-m03" exists ...
	I1014 13:54:53.731355   62978 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 13:54:53.731436   62978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614450-m03
	I1014 13:54:53.749406   62978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/ha-614450-m03/id_rsa Username:docker}
	I1014 13:54:53.842618   62978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:54:53.854842   62978 kubeconfig.go:125] found "ha-614450" server: "https://192.168.49.254:8443"
	I1014 13:54:53.854872   62978 api_server.go:166] Checking apiserver status ...
	I1014 13:54:53.854913   62978 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 13:54:53.865923   62978 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1323/cgroup
	I1014 13:54:53.875128   62978 api_server.go:182] apiserver freezer: "12:freezer:/docker/8a3b12b6b541e068c74b67e97a4fef470ba9f60b9868421b99149fe8577dc693/kubepods/burstable/pod31b86989e53ef5d8fd989e1c2df4a89f/7a29abae01e90daa4760e20e1f41610777aa78c7accdccad13472e510acbec9b"
	I1014 13:54:53.875201   62978 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8a3b12b6b541e068c74b67e97a4fef470ba9f60b9868421b99149fe8577dc693/kubepods/burstable/pod31b86989e53ef5d8fd989e1c2df4a89f/7a29abae01e90daa4760e20e1f41610777aa78c7accdccad13472e510acbec9b/freezer.state
	I1014 13:54:53.884158   62978 api_server.go:204] freezer state: "THAWED"
	I1014 13:54:53.884191   62978 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1014 13:54:53.892526   62978 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1014 13:54:53.892566   62978 status.go:463] ha-614450-m03 apiserver status = Running (err=<nil>)
	I1014 13:54:53.892576   62978 status.go:176] ha-614450-m03 status: &{Name:ha-614450-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 13:54:53.892594   62978 status.go:174] checking status of ha-614450-m04 ...
	I1014 13:54:53.892925   62978 cli_runner.go:164] Run: docker container inspect ha-614450-m04 --format={{.State.Status}}
	I1014 13:54:53.909947   62978 status.go:371] ha-614450-m04 host status = "Running" (err=<nil>)
	I1014 13:54:53.909977   62978 host.go:66] Checking if "ha-614450-m04" exists ...
	I1014 13:54:53.910263   62978 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-614450-m04
	I1014 13:54:53.927618   62978 host.go:66] Checking if "ha-614450-m04" exists ...
	I1014 13:54:53.928060   62978 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 13:54:53.928119   62978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-614450-m04
	I1014 13:54:53.944778   62978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/ha-614450-m04/id_rsa Username:docker}
	I1014 13:54:54.035420   62978 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 13:54:54.051767   62978 status.go:176] ha-614450-m04 status: &{Name:ha-614450-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.82s)
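
Note: status intentionally exits non-zero while any node is down (exit status 7 in this run), so a manual check of the same degraded state is roughly:
    out/minikube-linux-arm64 -p ha-614450 node stop m02 -v=7 --alsologtostderr
    out/minikube-linux-arm64 -p ha-614450 status -v=7 --alsologtostderr
    echo $?    # 7 while m02 is stopped, per the output captured above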

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (18.9s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-614450 node start m02 -v=7 --alsologtostderr: (17.752188205s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-614450 status -v=7 --alsologtostderr: (1.035830858s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.90s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.01s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.009739458s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.01s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (120.27s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-614450 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-614450 -v=7 --alsologtostderr
E1014 13:55:33.486937    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:55:33.493632    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:55:33.504976    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:55:33.526308    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:55:33.567679    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:55:33.649058    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:55:33.810500    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:55:34.132108    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:55:34.773461    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:55:36.054783    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:55:38.617448    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:55:43.739076    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-614450 -v=7 --alsologtostderr: (37.42952197s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-614450 --wait=true -v=7 --alsologtostderr
E1014 13:55:53.980745    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:56:14.462018    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:56:33.081999    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:56:55.424240    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
E1014 13:57:00.802303    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-614450 --wait=true -v=7 --alsologtostderr: (1m22.669782994s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-614450
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (120.27s)
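
Note: the restart cycle here is a plain stop/start of the whole profile. Reproducing it by hand with the flags from this log would look roughly like:
    out/minikube-linux-arm64 node list -p ha-614450 -v=7 --alsologtostderr
    out/minikube-linux-arm64 stop -p ha-614450 -v=7 --alsologtostderr
    out/minikube-linux-arm64 start -p ha-614450 --wait=true -v=7 --alsologtostderr
    out/minikube-linux-arm64 node list -p ha-614450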

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.8s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-614450 node delete m03 -v=7 --alsologtostderr: (9.894166333s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.80s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-614450 stop -v=7 --alsologtostderr: (35.875084845s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-614450 status -v=7 --alsologtostderr: exit status 7 (123.949958ms)

                                                
                                                
-- stdout --
	ha-614450
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-614450-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-614450-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 13:58:02.441402   77217 out.go:345] Setting OutFile to fd 1 ...
	I1014 13:58:02.441537   77217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:58:02.441546   77217 out.go:358] Setting ErrFile to fd 2...
	I1014 13:58:02.441552   77217 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 13:58:02.441784   77217 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2229/.minikube/bin
	I1014 13:58:02.441968   77217 out.go:352] Setting JSON to false
	I1014 13:58:02.442021   77217 mustload.go:65] Loading cluster: ha-614450
	I1014 13:58:02.442103   77217 notify.go:220] Checking for updates...
	I1014 13:58:02.442498   77217 config.go:182] Loaded profile config "ha-614450": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1014 13:58:02.442510   77217 status.go:174] checking status of ha-614450 ...
	I1014 13:58:02.443151   77217 cli_runner.go:164] Run: docker container inspect ha-614450 --format={{.State.Status}}
	I1014 13:58:02.462214   77217 status.go:371] ha-614450 host status = "Stopped" (err=<nil>)
	I1014 13:58:02.462236   77217 status.go:384] host is not running, skipping remaining checks
	I1014 13:58:02.462242   77217 status.go:176] ha-614450 status: &{Name:ha-614450 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 13:58:02.462280   77217 status.go:174] checking status of ha-614450-m02 ...
	I1014 13:58:02.462596   77217 cli_runner.go:164] Run: docker container inspect ha-614450-m02 --format={{.State.Status}}
	I1014 13:58:02.488072   77217 status.go:371] ha-614450-m02 host status = "Stopped" (err=<nil>)
	I1014 13:58:02.488096   77217 status.go:384] host is not running, skipping remaining checks
	I1014 13:58:02.488103   77217 status.go:176] ha-614450-m02 status: &{Name:ha-614450-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 13:58:02.488123   77217 status.go:174] checking status of ha-614450-m04 ...
	I1014 13:58:02.488448   77217 cli_runner.go:164] Run: docker container inspect ha-614450-m04 --format={{.State.Status}}
	I1014 13:58:02.506821   77217 status.go:371] ha-614450-m04 host status = "Stopped" (err=<nil>)
	I1014 13:58:02.506843   77217 status.go:384] host is not running, skipping remaining checks
	I1014 13:58:02.506850   77217 status.go:176] ha-614450-m04 status: &{Name:ha-614450-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.00s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (78.87s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-614450 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E1014 13:58:17.345940    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-614450 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m17.935558854s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (78.87s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.79s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (48.46s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-614450 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-614450 --control-plane -v=7 --alsologtostderr: (47.451107818s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-614450 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-614450 status -v=7 --alsologtostderr: (1.009993717s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (48.46s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.99s)

                                                
                                    
TestJSONOutput/start/Command (53.11s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-412854 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E1014 14:00:33.487165    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:01:01.189207    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-412854 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (53.109108767s)
--- PASS: TestJSONOutput/start/Command (53.11s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-412854 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.69s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-412854 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.78s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-412854 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-412854 --output=json --user=testUser: (5.782299474s)
--- PASS: TestJSONOutput/stop/Command (5.78s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-171554 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-171554 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (85.7117ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7dde0e50-4c0c-4aaf-99fc-40d91a557adf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-171554] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d99d4b53-8115-4849-9f29-b4b8cb2d5797","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19790"}}
	{"specversion":"1.0","id":"75581b36-fc83-4830-bad1-c275d55500c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"865f7ff3-51c0-49e6-887e-e97776fb5068","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19790-2229/kubeconfig"}}
	{"specversion":"1.0","id":"8178028a-989d-4d2c-8691-307a4205e397","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2229/.minikube"}}
	{"specversion":"1.0","id":"7e3aaf04-eb15-4a82-b968-fd69b169eb5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"5248ae56-b880-4558-818e-d47d73dd39df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a500c5cc-5c70-4476-9c0f-9f7779c531e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-171554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-171554
--- PASS: TestErrorJSONOutput (0.23s)
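
Note: each stdout line above is a CloudEvents-style JSON record, so the error message can be pulled out of a run of the same command. Sketch only, assuming jq is installed on the host:
    out/minikube-linux-arm64 start -p json-output-error-171554 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type=="io.k8s.sigs.minikube.error") | .data.message'
    # prints: The driver 'fail' is not supported on linux/arm64 (per the stdout captured above)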

                                                
                                    
TestKicCustomNetwork/create_custom_network (38.55s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-155935 --network=
E1014 14:01:33.082292    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-155935 --network=: (36.462209671s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-155935" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-155935
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-155935: (2.057758925s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.55s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (32.1s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-247211 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-247211 --network=bridge: (30.100444263s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-247211" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-247211
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-247211: (1.972516299s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.10s)

                                                
                                    
TestKicExistingNetwork (32.89s)

=== RUN   TestKicExistingNetwork
I1014 14:02:35.667497    7542 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1014 14:02:35.683157    7542 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1014 14:02:35.683232    7542 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1014 14:02:35.683249    7542 cli_runner.go:164] Run: docker network inspect existing-network
W1014 14:02:35.698713    7542 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1014 14:02:35.698757    7542 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1014 14:02:35.698771    7542 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1014 14:02:35.698872    7542 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1014 14:02:35.713582    7542 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-2e4330e6bcb7 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:63:d2:f4:1e} reservation:<nil>}
I1014 14:02:35.713897    7542 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40014cf4d0}
I1014 14:02:35.713917    7542 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1014 14:02:35.713966    7542 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1014 14:02:35.785629    7542 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-652670 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-652670 --network=existing-network: (30.674126839s)
helpers_test.go:175: Cleaning up "existing-network-652670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-652670
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-652670: (2.068962963s)
I1014 14:03:08.545107    7542 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.89s)
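For reference, the scenario above (a docker network created outside minikube, then reused via --network) can be replayed by hand with the same commands the test drives. The sketch below is illustrative only: the subnet and profile name are placeholders, not values guaranteed to be free on another host.

    # pre-create a bridge network the way network_create.go does in the log above
    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
    # start a cluster that attaches to the pre-existing network instead of creating its own
    out/minikube-linux-arm64 start -p existing-network-demo --network=existing-network
    # clean up the profile and the manually created network
    out/minikube-linux-arm64 delete -p existing-network-demo
    docker network rm existing-network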

                                                
                                    
x
+
TestKicCustomSubnet (34.39s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-198951 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-198951 --subnet=192.168.60.0/24: (32.200919479s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-198951 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-198951" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-198951
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-198951: (2.1650944s)
--- PASS: TestKicCustomSubnet (34.39s)
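The subnet check above reduces to a start flag plus a docker inspect. A minimal sketch follows, assuming an unused profile name (custom-subnet-demo) and relying on minikube naming the docker network after the profile, as the inspect command in the log shows.

    out/minikube-linux-arm64 start -p custom-subnet-demo --subnet=192.168.60.0/24
    docker network inspect custom-subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"   # expect 192.168.60.0/24
    out/minikube-linux-arm64 delete -p custom-subnet-demo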

                                                
                                    
x
+
TestKicStaticIP (34.33s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-275242 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-275242 --static-ip=192.168.200.200: (32.014287809s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-275242 ip
helpers_test.go:175: Cleaning up "static-ip-275242" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-275242
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-275242: (2.113523631s)
--- PASS: TestKicStaticIP (34.33s)
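The static-IP flow above can be reproduced the same way; the profile name below is illustrative, and the address only needs to be a free private IP, as in the test.

    out/minikube-linux-arm64 start -p static-ip-demo --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p static-ip-demo ip   # expect 192.168.200.200
    out/minikube-linux-arm64 delete -p static-ip-demo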

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (72.8s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-960976 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-960976 --driver=docker  --container-runtime=containerd: (30.593395623s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-963569 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-963569 --driver=docker  --container-runtime=containerd: (36.452041545s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-960976
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-963569
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-963569" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-963569
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-963569: (2.020012157s)
helpers_test.go:175: Cleaning up "first-960976" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-960976
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-960976: (2.311378751s)
--- PASS: TestMinikubeProfile (72.80s)
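As a rough sketch of the profile bookkeeping exercised above (profile names are placeholders): two clusters are started, one is selected as the active profile, and profile list -ojson is expected to report both as valid.

    out/minikube-linux-arm64 start -p first-demo --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 start -p second-demo --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 profile first-demo      # switch the active profile
    out/minikube-linux-arm64 profile list -ojson     # both profiles should be listed
    out/minikube-linux-arm64 delete -p second-demo
    out/minikube-linux-arm64 delete -p first-demo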

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (6.31s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-903003 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
E1014 14:05:33.487400    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-903003 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.311300961s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.31s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-903003 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
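The two steps above (start a no-Kubernetes node with a host mount, then list the mount point over ssh) look like this when run by hand; the profile name is a placeholder and the flag values simply mirror the test invocation.

    out/minikube-linux-arm64 start -p mount-demo --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p mount-demo ssh -- ls /minikube-host   # host directory contents visible inside the node
    out/minikube-linux-arm64 delete -p mount-demo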

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (5.88s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-905792 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-905792 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.881961646s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.88s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-905792 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-903003 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-903003 --alsologtostderr -v=5: (1.605222195s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-905792 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-905792
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-905792: (1.256231648s)
--- PASS: TestMountStart/serial/Stop (1.26s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.35s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-905792
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-905792: (6.348593031s)
--- PASS: TestMountStart/serial/RestartStopped (7.35s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-905792 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (76.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-400835 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1014 14:06:33.082205    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-400835 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m15.662543912s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (76.20s)
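A minimal sketch of the two-node bring-up verified above, with an illustrative profile name; status should report one control plane plus one worker, all components Running.

    out/minikube-linux-arm64 start -p multinode-demo --wait=true --memory=2200 --nodes=2 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p multinode-demo status --alsologtostderr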

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (15.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-400835 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-400835 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-400835 -- rollout status deployment/busybox: (13.345706005s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-400835 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-400835 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-400835 -- exec busybox-7dff88458-mhxqg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-400835 -- exec busybox-7dff88458-tq726 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-400835 -- exec busybox-7dff88458-mhxqg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-400835 -- exec busybox-7dff88458-tq726 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-400835 -- exec busybox-7dff88458-mhxqg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-400835 -- exec busybox-7dff88458-tq726 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (15.31s)
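The DNS checks above can also be issued with kubectl directly once the busybox deployment has rolled out. In the sketch below the context name is a placeholder for the multinode profile, the pod name is to be taken from the get pods output, and the manifest path is the repo-relative file the test applies.

    kubectl --context multinode-demo apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    kubectl --context multinode-demo rollout status deployment/busybox
    kubectl --context multinode-demo get pods -o jsonpath='{.items[*].metadata.name}'
    kubectl --context multinode-demo exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local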

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-400835 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-400835 -- exec busybox-7dff88458-mhxqg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-400835 -- exec busybox-7dff88458-mhxqg -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-400835 -- exec busybox-7dff88458-tq726 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-400835 -- exec busybox-7dff88458-tq726 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (15.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-400835 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-400835 -v 3 --alsologtostderr: (15.125364291s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.83s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-400835 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.11s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 cp testdata/cp-test.txt multinode-400835:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 ssh -n multinode-400835 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 cp multinode-400835:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3881718677/001/cp-test_multinode-400835.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 ssh -n multinode-400835 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 cp multinode-400835:/home/docker/cp-test.txt multinode-400835-m02:/home/docker/cp-test_multinode-400835_multinode-400835-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 ssh -n multinode-400835 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 ssh -n multinode-400835-m02 "sudo cat /home/docker/cp-test_multinode-400835_multinode-400835-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 cp multinode-400835:/home/docker/cp-test.txt multinode-400835-m03:/home/docker/cp-test_multinode-400835_multinode-400835-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 ssh -n multinode-400835 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 ssh -n multinode-400835-m03 "sudo cat /home/docker/cp-test_multinode-400835_multinode-400835-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 cp testdata/cp-test.txt multinode-400835-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 ssh -n multinode-400835-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 cp multinode-400835-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3881718677/001/cp-test_multinode-400835-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 ssh -n multinode-400835-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 cp multinode-400835-m02:/home/docker/cp-test.txt multinode-400835:/home/docker/cp-test_multinode-400835-m02_multinode-400835.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 ssh -n multinode-400835-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 ssh -n multinode-400835 "sudo cat /home/docker/cp-test_multinode-400835-m02_multinode-400835.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 cp multinode-400835-m02:/home/docker/cp-test.txt multinode-400835-m03:/home/docker/cp-test_multinode-400835-m02_multinode-400835-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 ssh -n multinode-400835-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 ssh -n multinode-400835-m03 "sudo cat /home/docker/cp-test_multinode-400835-m02_multinode-400835-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 cp testdata/cp-test.txt multinode-400835-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 ssh -n multinode-400835-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 cp multinode-400835-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3881718677/001/cp-test_multinode-400835-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 ssh -n multinode-400835-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 cp multinode-400835-m03:/home/docker/cp-test.txt multinode-400835:/home/docker/cp-test_multinode-400835-m03_multinode-400835.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 ssh -n multinode-400835-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 ssh -n multinode-400835 "sudo cat /home/docker/cp-test_multinode-400835-m03_multinode-400835.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 cp multinode-400835-m03:/home/docker/cp-test.txt multinode-400835-m02:/home/docker/cp-test_multinode-400835-m03_multinode-400835-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 ssh -n multinode-400835-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 ssh -n multinode-400835-m02 "sudo cat /home/docker/cp-test_multinode-400835-m03_multinode-400835-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.08s)
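The copy matrix above boils down to three shapes of minikube cp (host to node, node to host, node to node), each verified with an ssh cat; profile, node, and file names below are placeholders.

    out/minikube-linux-arm64 -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt        # host -> node
    out/minikube-linux-arm64 -p multinode-demo ssh -n multinode-demo "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-arm64 -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test-copy.txt        # node -> host
    out/minikube-linux-arm64 -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt   # node -> node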

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-400835 node stop m03: (1.214356358s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 status
E1014 14:07:56.164398    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-400835 status: exit status 7 (499.251311ms)

                                                
                                                
-- stdout --
	multinode-400835
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-400835-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-400835-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-400835 status --alsologtostderr: exit status 7 (505.15176ms)

                                                
                                                
-- stdout --
	multinode-400835
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-400835-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-400835-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 14:07:56.340135  130706 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:07:56.340801  130706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:07:56.340814  130706 out.go:358] Setting ErrFile to fd 2...
	I1014 14:07:56.340820  130706 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:07:56.341086  130706 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2229/.minikube/bin
	I1014 14:07:56.341283  130706 out.go:352] Setting JSON to false
	I1014 14:07:56.341321  130706 mustload.go:65] Loading cluster: multinode-400835
	I1014 14:07:56.341890  130706 config.go:182] Loaded profile config "multinode-400835": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1014 14:07:56.341917  130706 status.go:174] checking status of multinode-400835 ...
	I1014 14:07:56.342523  130706 cli_runner.go:164] Run: docker container inspect multinode-400835 --format={{.State.Status}}
	I1014 14:07:56.343023  130706 notify.go:220] Checking for updates...
	I1014 14:07:56.361907  130706 status.go:371] multinode-400835 host status = "Running" (err=<nil>)
	I1014 14:07:56.361930  130706 host.go:66] Checking if "multinode-400835" exists ...
	I1014 14:07:56.362246  130706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-400835
	I1014 14:07:56.378600  130706 host.go:66] Checking if "multinode-400835" exists ...
	I1014 14:07:56.378908  130706 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 14:07:56.378957  130706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-400835
	I1014 14:07:56.404020  130706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/multinode-400835/id_rsa Username:docker}
	I1014 14:07:56.494289  130706 ssh_runner.go:195] Run: systemctl --version
	I1014 14:07:56.498600  130706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 14:07:56.510699  130706 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 14:07:56.569645  130706 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-10-14 14:07:56.559752237 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 14:07:56.570244  130706 kubeconfig.go:125] found "multinode-400835" server: "https://192.168.67.2:8443"
	I1014 14:07:56.570278  130706 api_server.go:166] Checking apiserver status ...
	I1014 14:07:56.570324  130706 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1014 14:07:56.581308  130706 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	I1014 14:07:56.590901  130706 api_server.go:182] apiserver freezer: "12:freezer:/docker/45a16405fe257b5c009924d1e9a9f7dd72a6e0b54ea4e90cdd547619bcfb114e/kubepods/burstable/podcae081139785019017a6b8d37dda5063/893ca869bfa380e2a74719980323e0ad533b227cd818ceecc5fb9fccb9187273"
	I1014 14:07:56.590981  130706 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/45a16405fe257b5c009924d1e9a9f7dd72a6e0b54ea4e90cdd547619bcfb114e/kubepods/burstable/podcae081139785019017a6b8d37dda5063/893ca869bfa380e2a74719980323e0ad533b227cd818ceecc5fb9fccb9187273/freezer.state
	I1014 14:07:56.600574  130706 api_server.go:204] freezer state: "THAWED"
	I1014 14:07:56.600611  130706 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1014 14:07:56.609339  130706 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1014 14:07:56.609370  130706 status.go:463] multinode-400835 apiserver status = Running (err=<nil>)
	I1014 14:07:56.609382  130706 status.go:176] multinode-400835 status: &{Name:multinode-400835 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 14:07:56.609412  130706 status.go:174] checking status of multinode-400835-m02 ...
	I1014 14:07:56.609754  130706 cli_runner.go:164] Run: docker container inspect multinode-400835-m02 --format={{.State.Status}}
	I1014 14:07:56.625800  130706 status.go:371] multinode-400835-m02 host status = "Running" (err=<nil>)
	I1014 14:07:56.625847  130706 host.go:66] Checking if "multinode-400835-m02" exists ...
	I1014 14:07:56.626176  130706 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-400835-m02
	I1014 14:07:56.641900  130706 host.go:66] Checking if "multinode-400835-m02" exists ...
	I1014 14:07:56.642205  130706 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1014 14:07:56.642263  130706 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-400835-m02
	I1014 14:07:56.658921  130706 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19790-2229/.minikube/machines/multinode-400835-m02/id_rsa Username:docker}
	I1014 14:07:56.750177  130706 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1014 14:07:56.761801  130706 status.go:176] multinode-400835-m02 status: &{Name:multinode-400835-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1014 14:07:56.761836  130706 status.go:174] checking status of multinode-400835-m03 ...
	I1014 14:07:56.762157  130706 cli_runner.go:164] Run: docker container inspect multinode-400835-m03 --format={{.State.Status}}
	I1014 14:07:56.779075  130706 status.go:371] multinode-400835-m03 host status = "Stopped" (err=<nil>)
	I1014 14:07:56.779096  130706 status.go:384] host is not running, skipping remaining checks
	I1014 14:07:56.779103  130706 status.go:176] multinode-400835-m03 status: &{Name:multinode-400835-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.22s)
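As the -- stdout -- above shows, status exits with code 7 once any host is stopped, so the non-zero exits here are the expected outcome rather than a failure. A hand-run equivalent (profile name illustrative):

    out/minikube-linux-arm64 -p multinode-demo node stop m03
    out/minikube-linux-arm64 -p multinode-demo status --alsologtostderr   # exit status 7 while m03 is down
    out/minikube-linux-arm64 -p multinode-demo node start m03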

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (9.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-400835 node start m03 -v=7 --alsologtostderr: (8.871609815s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.63s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (94.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-400835
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-400835
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-400835: (24.987406206s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-400835 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-400835 --wait=true -v=8 --alsologtostderr: (1m9.15740245s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-400835
--- PASS: TestMultiNode/serial/RestartKeepsNodes (94.27s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-400835 node delete m03: (5.005614565s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.76s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-400835 stop: (23.791634161s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-400835 status: exit status 7 (100.051184ms)

                                                
                                                
-- stdout --
	multinode-400835
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-400835-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-400835 status --alsologtostderr: exit status 7 (101.121919ms)

                                                
                                                
-- stdout --
	multinode-400835
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-400835-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 14:10:10.397907  139152 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:10:10.398410  139152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:10:10.398446  139152 out.go:358] Setting ErrFile to fd 2...
	I1014 14:10:10.398467  139152 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:10:10.398979  139152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2229/.minikube/bin
	I1014 14:10:10.399205  139152 out.go:352] Setting JSON to false
	I1014 14:10:10.399280  139152 mustload.go:65] Loading cluster: multinode-400835
	I1014 14:10:10.399354  139152 notify.go:220] Checking for updates...
	I1014 14:10:10.399740  139152 config.go:182] Loaded profile config "multinode-400835": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1014 14:10:10.399774  139152 status.go:174] checking status of multinode-400835 ...
	I1014 14:10:10.400608  139152 cli_runner.go:164] Run: docker container inspect multinode-400835 --format={{.State.Status}}
	I1014 14:10:10.418297  139152 status.go:371] multinode-400835 host status = "Stopped" (err=<nil>)
	I1014 14:10:10.418321  139152 status.go:384] host is not running, skipping remaining checks
	I1014 14:10:10.418328  139152 status.go:176] multinode-400835 status: &{Name:multinode-400835 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1014 14:10:10.418355  139152 status.go:174] checking status of multinode-400835-m02 ...
	I1014 14:10:10.418675  139152 cli_runner.go:164] Run: docker container inspect multinode-400835-m02 --format={{.State.Status}}
	I1014 14:10:10.438709  139152 status.go:371] multinode-400835-m02 host status = "Stopped" (err=<nil>)
	I1014 14:10:10.438741  139152 status.go:384] host is not running, skipping remaining checks
	I1014 14:10:10.438754  139152 status.go:176] multinode-400835-m02 status: &{Name:multinode-400835-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.99s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (52.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-400835 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E1014 14:10:33.487404    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-400835 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (52.181621369s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-400835 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.90s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (32.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-400835
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-400835-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-400835-m02 --driver=docker  --container-runtime=containerd: exit status 14 (89.802442ms)

                                                
                                                
-- stdout --
	* [multinode-400835-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-2229/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2229/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-400835-m02' is duplicated with machine name 'multinode-400835-m02' in profile 'multinode-400835'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-400835-m03 --driver=docker  --container-runtime=containerd
E1014 14:11:33.082513    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-400835-m03 --driver=docker  --container-runtime=containerd: (29.782081335s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-400835
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-400835: exit status 80 (309.457245ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-400835 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-400835-m03 already exists in multinode-400835-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-400835-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-400835-m03: (1.966367159s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.20s)
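Both rejections above are name collisions: starting a profile whose name matches a machine in an existing profile fails fast with MK_USAGE (exit 14), and node add refuses a node name already claimed by a standalone profile (exit 80). A minimal repro of the first case, with placeholder names:

    # profile multinode-demo already owns a machine named multinode-demo-m02
    out/minikube-linux-arm64 start -p multinode-demo-m02 --driver=docker --container-runtime=containerd   # exit 14: profile name must be unique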

                                                
                                    
x
+
TestPreload (123.11s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-955675 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E1014 14:11:56.551340    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-955675 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m23.881655135s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-955675 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-955675 image pull gcr.io/k8s-minikube/busybox: (2.405443376s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-955675
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-955675: (12.06420527s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-955675 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-955675 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (21.816950932s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-955675 image list
helpers_test.go:175: Cleaning up "test-preload-955675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-955675
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-955675: (2.616904424s)
--- PASS: TestPreload (123.11s)
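The preload check above is a start/pull/stop/start cycle verifying that an image pulled into the cluster survives a restart when preloaded tarballs are disabled. A sketch with an illustrative profile name:

    out/minikube-linux-arm64 start -p preload-demo --memory=2200 --wait=true --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
    out/minikube-linux-arm64 -p preload-demo image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-arm64 stop -p preload-demo
    out/minikube-linux-arm64 start -p preload-demo --memory=2200 --wait=true --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p preload-demo image list   # busybox should still be listed after the restart
    out/minikube-linux-arm64 delete -p preload-demo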

                                                
                                    
x
+
TestScheduledStopUnix (108.13s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-787506 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-787506 --memory=2048 --driver=docker  --container-runtime=containerd: (31.49741822s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-787506 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-787506 -n scheduled-stop-787506
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-787506 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1014 14:14:14.584632    7542 retry.go:31] will retry after 136.925µs: open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/scheduled-stop-787506/pid: no such file or directory
I1014 14:14:14.584990    7542 retry.go:31] will retry after 195.249µs: open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/scheduled-stop-787506/pid: no such file or directory
I1014 14:14:14.586104    7542 retry.go:31] will retry after 123.006µs: open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/scheduled-stop-787506/pid: no such file or directory
I1014 14:14:14.587213    7542 retry.go:31] will retry after 313.615µs: open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/scheduled-stop-787506/pid: no such file or directory
I1014 14:14:14.588409    7542 retry.go:31] will retry after 394.099µs: open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/scheduled-stop-787506/pid: no such file or directory
I1014 14:14:14.589572    7542 retry.go:31] will retry after 811.155µs: open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/scheduled-stop-787506/pid: no such file or directory
I1014 14:14:14.590686    7542 retry.go:31] will retry after 961.94µs: open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/scheduled-stop-787506/pid: no such file or directory
I1014 14:14:14.591780    7542 retry.go:31] will retry after 1.760923ms: open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/scheduled-stop-787506/pid: no such file or directory
I1014 14:14:14.593970    7542 retry.go:31] will retry after 2.89111ms: open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/scheduled-stop-787506/pid: no such file or directory
I1014 14:14:14.597201    7542 retry.go:31] will retry after 5.687255ms: open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/scheduled-stop-787506/pid: no such file or directory
I1014 14:14:14.603366    7542 retry.go:31] will retry after 6.850857ms: open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/scheduled-stop-787506/pid: no such file or directory
I1014 14:14:14.610604    7542 retry.go:31] will retry after 6.737034ms: open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/scheduled-stop-787506/pid: no such file or directory
I1014 14:14:14.617850    7542 retry.go:31] will retry after 8.518528ms: open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/scheduled-stop-787506/pid: no such file or directory
I1014 14:14:14.627094    7542 retry.go:31] will retry after 21.927952ms: open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/scheduled-stop-787506/pid: no such file or directory
I1014 14:14:14.649299    7542 retry.go:31] will retry after 33.383379ms: open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/scheduled-stop-787506/pid: no such file or directory
I1014 14:14:14.683525    7542 retry.go:31] will retry after 36.721856ms: open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/scheduled-stop-787506/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-787506 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-787506 -n scheduled-stop-787506
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-787506
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-787506 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-787506
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-787506: exit status 7 (75.966354ms)

                                                
                                                
-- stdout --
	scheduled-stop-787506
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-787506 -n scheduled-stop-787506
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-787506 -n scheduled-stop-787506: exit status 7 (73.167917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-787506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-787506
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-787506: (5.001854971s)
--- PASS: TestScheduledStopUnix (108.13s)
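The scheduled-stop flow above, run by hand with an illustrative profile name: schedule a stop, read the remaining time, cancel it, then schedule a short one and confirm the host ends up Stopped (at which point status exits 7).

    out/minikube-linux-arm64 start -p sched-demo --memory=2048 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 stop -p sched-demo --schedule 5m
    out/minikube-linux-arm64 status --format={{.TimeToStop}} -p sched-demo
    out/minikube-linux-arm64 stop -p sched-demo --cancel-scheduled
    out/minikube-linux-arm64 stop -p sched-demo --schedule 15s
    sleep 20; out/minikube-linux-arm64 status -p sched-demo   # exit status 7, host: Stopped
    out/minikube-linux-arm64 delete -p sched-demo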

                                                
                                    
x
+
TestInsufficientStorage (10.46s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-210422 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
E1014 14:15:33.487290    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-210422 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.981469161s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bb0735e9-7043-41c6-8753-f744382528b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-210422] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2096dfd1-07b6-49b0-abeb-6e7d12e02ccb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19790"}}
	{"specversion":"1.0","id":"98279830-0515-417c-ae58-7d80146b763d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0ed3bc10-a77b-4e71-bf58-7d304aa8a11a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19790-2229/kubeconfig"}}
	{"specversion":"1.0","id":"1d5fb06f-bdc9-402a-81df-8272a6233fd4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2229/.minikube"}}
	{"specversion":"1.0","id":"8fbdc753-3569-4d7e-9b67-991fa8da6d6e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"fb834d79-3d21-477e-9d67-9eff8224271d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2715c660-a4ce-46c8-bf0a-058a954e3da0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"9f9a3785-2d43-4603-ac6b-f9b76ca103df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f25cfcbd-0a47-4f87-a6ed-1658d98d11c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a4436f73-06eb-4659-bac6-765da4beacb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"10d9fd5f-c433-4ade-97ee-01909aeade0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-210422\" primary control-plane node in \"insufficient-storage-210422\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d7c47ac3-ed1f-48ee-a33c-8a267bbf9918","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1728382586-19774 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"c2870fb5-6f3c-4705-969f-cb765aac685d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"485311ff-0153-45d4-b6e4-28c63d03531d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-210422 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-210422 --output=json --layout=cluster: exit status 7 (281.947776ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-210422","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-210422","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 14:15:38.956930  157819 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-210422" does not appear in /home/jenkins/minikube-integration/19790-2229/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-210422 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-210422 --output=json --layout=cluster: exit status 7 (296.512723ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-210422","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-210422","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1014 14:15:39.254690  157880 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-210422" does not appear in /home/jenkins/minikube-integration/19790-2229/kubeconfig
	E1014 14:15:39.264579  157880 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/insufficient-storage-210422/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-210422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-210422
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-210422: (1.895978678s)
--- PASS: TestInsufficientStorage (10.46s)
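Note on the output above: `minikube start --output=json` emits one CloudEvents-style JSON object per line, and the run above fails once an "io.k8s.sigs.minikube.error" event with name RSRC_DOCKER_STORAGE (exit code 26) appears. A minimal Go sketch of scanning such a stream for that event; the event type and data keys come from the captured stdout, while the helper itself is illustrative and not part of the test suite:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// event mirrors the line-delimited CloudEvents printed by `minikube start --output=json`.
type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

// findStorageError scans line-delimited JSON output for the out-of-disk error event.
func findStorageError(output string) (event, bool) {
	sc := bufio.NewScanner(strings.NewReader(output))
	for sc.Scan() {
		var ev event
		if json.Unmarshal([]byte(strings.TrimSpace(sc.Text())), &ev) != nil {
			continue // ignore lines that are not JSON events
		}
		if ev.Type == "io.k8s.sigs.minikube.error" && ev.Data["name"] == "RSRC_DOCKER_STORAGE" {
			return ev, true
		}
	}
	return event{}, false
}

func main() {
	// sample stands in for the captured stdout shown above (abbreviated).
	sample := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`
	if ev, ok := findStorageError(sample); ok {
		fmt.Printf("start failed: %s (exit code %s)\n", ev.Data["message"], ev.Data["exitcode"])
	}
}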

                                                
                                    
TestRunningBinaryUpgrade (82.06s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1723487694 start -p running-upgrade-919096 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E1014 14:20:33.487659    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1723487694 start -p running-upgrade-919096 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (43.97736885s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-919096 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1014 14:21:33.081772    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-919096 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.584935654s)
helpers_test.go:175: Cleaning up "running-upgrade-919096" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-919096
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-919096: (2.790482837s)
--- PASS: TestRunningBinaryUpgrade (82.06s)

                                                
                                    
TestKubernetesUpgrade (348.44s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-238030 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-238030 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (58.911937211s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-238030
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-238030: (4.671618743s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-238030 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-238030 status --format={{.Host}}: exit status 7 (110.921675ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-238030 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-238030 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m34.947000138s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-238030 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-238030 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-238030 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (88.151234ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-238030] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-2229/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2229/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-238030
	    minikube start -p kubernetes-upgrade-238030 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2380302 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-238030 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-238030 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-238030 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.061167988s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-238030" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-238030
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-238030: (2.510461657s)
--- PASS: TestKubernetesUpgrade (348.44s)
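Note on the downgrade step above: minikube refuses to move the existing v1.31.1 cluster back to v1.20.0 and exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED) instead of modifying it. A hedged sketch of asserting that behaviour outside the suite; the binary path, profile name and flags are copied from the log, the check itself is illustrative:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Attempt the same downgrade the test performs; it should be rejected.
	cmd := exec.Command("out/minikube-linux-arm64", "start",
		"-p", "kubernetes-upgrade-238030",
		"--memory=2200", "--kubernetes-version=v1.20.0",
		"--driver=docker", "--container-runtime=containerd")
	err := cmd.Run()

	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 106:
		fmt.Println("downgrade refused as expected (exit status 106)")
	case err == nil:
		fmt.Println("unexpected success: downgrade should not be allowed")
	default:
		fmt.Println("unexpected failure:", err)
	}
}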

                                                
                                    
TestMissingContainerUpgrade (167.19s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2829835468 start -p missing-upgrade-994780 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2829835468 start -p missing-upgrade-994780 --memory=2200 --driver=docker  --container-runtime=containerd: (1m29.582038566s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-994780
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-994780
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-994780 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-994780 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m13.582432137s)
helpers_test.go:175: Cleaning up "missing-upgrade-994780" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-994780
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-994780: (2.019575926s)
--- PASS: TestMissingContainerUpgrade (167.19s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-318165 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-318165 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (95.855305ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-318165] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-2229/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2229/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.46s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-318165 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-318165 --driver=docker  --container-runtime=containerd: (38.822741862s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-318165 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.46s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (19.08s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-318165 --no-kubernetes --driver=docker  --container-runtime=containerd
E1014 14:16:33.082063    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-318165 --no-kubernetes --driver=docker  --container-runtime=containerd: (16.89960464s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-318165 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-318165 status -o json: exit status 2 (293.41711ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-318165","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-318165
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-318165: (1.882418662s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.08s)

                                                
                                    
TestNoKubernetes/serial/Start (7.03s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-318165 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-318165 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.026689678s)
--- PASS: TestNoKubernetes/serial/Start (7.03s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-318165 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-318165 "sudo systemctl is-active --quiet service kubelet": exit status 1 (357.846152ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.36s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.28s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.28s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.27s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-318165
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-318165: (1.271044817s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-318165 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-318165 --driver=docker  --container-runtime=containerd: (7.999343652s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.48s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-318165 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-318165 "sudo systemctl is-active --quiet service kubelet": exit status 1 (475.524651ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.48s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.7s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.70s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (108.14s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1166466659 start -p stopped-upgrade-845657 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1166466659 start -p stopped-upgrade-845657 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.936881757s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1166466659 -p stopped-upgrade-845657 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1166466659 -p stopped-upgrade-845657 stop: (20.019965325s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-845657 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-845657 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (42.179188937s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (108.14s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.41s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-845657
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-845657: (1.409786325s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.41s)

                                                
                                    
TestPause/serial/Start (62.19s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-855671 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-855671 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m2.18679752s)
--- PASS: TestPause/serial/Start (62.19s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.16s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-855671 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-855671 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.144980964s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.16s)

                                                
                                    
TestPause/serial/Pause (1.01s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-855671 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-855671 --alsologtostderr -v=5: (1.007012975s)
--- PASS: TestPause/serial/Pause (1.01s)

                                                
                                    
TestPause/serial/VerifyStatus (0.36s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-855671 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-855671 --output=json --layout=cluster: exit status 2 (356.45643ms)

                                                
                                                
-- stdout --
	{"Name":"pause-855671","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-855671","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
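Note on the status output above: `minikube status --output=json --layout=cluster` reports HTTP-style codes per component; here the cluster as a whole reports 418 ("Paused"), the apiserver 418 and the kubelet 405 ("Stopped"), and the command exits 2 because the cluster is not fully running. A minimal sketch of decoding that JSON shape; the struct fields mirror the keys visible in the stdout block, everything else is illustrative:

package main

import (
	"encoding/json"
	"fmt"
)

// These types mirror the JSON printed by `minikube status --output=json --layout=cluster`.
type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

type clusterState struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
	Nodes      []node               `json:"Nodes"`
}

func main() {
	// raw stands in for the stdout captured above (abbreviated).
	raw := `{"Name":"pause-855671","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-855671","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterState
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Printf("cluster %s: %d (%s)\n", st.Name, st.StatusCode, st.StatusName)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s/%s: %d (%s)\n", n.Name, name, c.StatusCode, c.StatusName)
		}
	}
}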

                                                
                                    
TestPause/serial/Unpause (0.77s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-855671 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.77s)

                                                
                                    
TestPause/serial/PauseAgain (1.12s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-855671 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-855671 --alsologtostderr -v=5: (1.117959353s)
--- PASS: TestPause/serial/PauseAgain (1.12s)

                                                
                                    
TestPause/serial/DeletePaused (3.53s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-855671 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-855671 --alsologtostderr -v=5: (3.52892863s)
--- PASS: TestPause/serial/DeletePaused (3.53s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.46s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-855671
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-855671: exit status 1 (23.0422ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-855671: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.46s)
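Note on the cleanup check above: after the paused profile is deleted, `docker volume inspect pause-855671` exits non-zero and prints an empty list, which is how the test concludes the volume is gone. A small illustrative sketch of the same check; the volume name is taken from the log and this is not the suite's pause_test.go code:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "volume", "inspect", "pause-855671")
	var stdout bytes.Buffer
	cmd.Stdout = &stdout
	err := cmd.Run()

	// A deleted volume yields a non-zero exit and "[]" on stdout.
	if err != nil && bytes.Equal(bytes.TrimSpace(stdout.Bytes()), []byte("[]")) {
		fmt.Println("volume pause-855671 removed as expected")
		return
	}
	fmt.Println("volume still present or unexpected output:", err)
}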

                                                
                                    
TestNetworkPlugins/group/false (4.62s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-017567 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-017567 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (342.836151ms)

                                                
                                                
-- stdout --
	* [false-017567] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19790
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19790-2229/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2229/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1014 14:23:04.799168  198452 out.go:345] Setting OutFile to fd 1 ...
	I1014 14:23:04.801261  198452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:23:04.801309  198452 out.go:358] Setting ErrFile to fd 2...
	I1014 14:23:04.801330  198452 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1014 14:23:04.801625  198452 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19790-2229/.minikube/bin
	I1014 14:23:04.802106  198452 out.go:352] Setting JSON to false
	I1014 14:23:04.807273  198452 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":3936,"bootTime":1728911849,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1014 14:23:04.807390  198452 start.go:139] virtualization:  
	I1014 14:23:04.812504  198452 out.go:177] * [false-017567] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1014 14:23:04.814879  198452 out.go:177]   - MINIKUBE_LOCATION=19790
	I1014 14:23:04.815040  198452 notify.go:220] Checking for updates...
	I1014 14:23:04.819540  198452 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1014 14:23:04.821491  198452 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19790-2229/kubeconfig
	I1014 14:23:04.823949  198452 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19790-2229/.minikube
	I1014 14:23:04.826356  198452 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1014 14:23:04.828702  198452 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1014 14:23:04.831423  198452 config.go:182] Loaded profile config "force-systemd-flag-418551": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I1014 14:23:04.831563  198452 driver.go:394] Setting default libvirt URI to qemu:///system
	I1014 14:23:04.902309  198452 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1014 14:23:04.902438  198452 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1014 14:23:05.025155  198452 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-10-14 14:23:05.005900056 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I1014 14:23:05.025280  198452 docker.go:318] overlay module found
	I1014 14:23:05.027666  198452 out.go:177] * Using the docker driver based on user configuration
	I1014 14:23:05.030224  198452 start.go:297] selected driver: docker
	I1014 14:23:05.030248  198452 start.go:901] validating driver "docker" against <nil>
	I1014 14:23:05.030271  198452 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1014 14:23:05.032884  198452 out.go:201] 
	W1014 14:23:05.035127  198452 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1014 14:23:05.037410  198452 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-017567 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-017567

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-017567

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-017567

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-017567

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-017567

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-017567

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-017567

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-017567

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-017567

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-017567

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-017567

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-017567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-017567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-017567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-017567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-017567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-017567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-017567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-017567" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-017567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-017567" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-017567" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-017567

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

>>> host: cri-dockerd version:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

>>> host: containerd daemon status:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

>>> host: containerd daemon config:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

>>> host: /etc/containerd/config.toml:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

>>> host: containerd config dump:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

>>> host: crio daemon status:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

>>> host: crio daemon config:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

>>> host: /etc/crio:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

>>> host: crio config:
* Profile "false-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-017567"

----------------------- debugLogs end: false-017567 [took: 4.099854572s] --------------------------------
helpers_test.go:175: Cleaning up "false-017567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-017567
--- PASS: TestNetworkPlugins/group/false (4.62s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (154.03s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-805757 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E1014 14:24:36.166437    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:25:33.486684    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:26:33.082056    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-805757 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m34.032334849s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (154.03s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.15s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-805757 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bc54f2be-a4b5-4bc6-ae06-ad7e0e8c2339] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bc54f2be-a4b5-4bc6-ae06-ad7e0e8c2339] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.024046747s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-805757 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (71.29s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-683238 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-683238 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m11.286383027s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.89s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-805757 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-805757 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.671253334s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-805757 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.89s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (14.66s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-805757 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-805757 --alsologtostderr -v=3: (14.661206342s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (14.66s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-805757 -n old-k8s-version-805757
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-805757 -n old-k8s-version-805757: exit status 7 (101.14778ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-805757 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.42s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-683238 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7ee35351-7bfd-4e70-892c-933b32880dde] Pending
helpers_test.go:344: "busybox" [7ee35351-7bfd-4e70-892c-933b32880dde] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7ee35351-7bfd-4e70-892c-933b32880dde] Running
E1014 14:28:36.552654    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004677607s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-683238 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-683238 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-683238 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.094128942s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-683238 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.06s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-683238 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-683238 --alsologtostderr -v=3: (12.055904738s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-683238 -n no-preload-683238
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-683238 -n no-preload-683238: exit status 7 (76.801459ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-683238 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (303.11s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-683238 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1014 14:30:33.487461    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:31:33.082429    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-683238 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (5m2.744197723s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-683238 -n no-preload-683238
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (303.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-z2l59" [3c66ace7-ede2-40b6-aab9-18c32d6dbe35] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00484715s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-z2l59" [3c66ace7-ede2-40b6-aab9-18c32d6dbe35] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004404952s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-683238 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-xqjk6" [dda33226-014f-41f6-8f0f-57cf943c512f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004370938s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-683238 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.22s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-683238 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-683238 -n no-preload-683238
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-683238 -n no-preload-683238: exit status 2 (325.601027ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-683238 -n no-preload-683238
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-683238 -n no-preload-683238: exit status 2 (357.799846ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-683238 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-683238 -n no-preload-683238
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-683238 -n no-preload-683238
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-xqjk6" [dda33226-014f-41f6-8f0f-57cf943c512f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005419354s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-805757 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.45s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-805757 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.45s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.77s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-805757 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-805757 --alsologtostderr -v=1: (1.015242193s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-805757 -n old-k8s-version-805757
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-805757 -n old-k8s-version-805757: exit status 2 (471.689892ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-805757 -n old-k8s-version-805757
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-805757 -n old-k8s-version-805757: exit status 2 (449.193754ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-805757 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-805757 -n old-k8s-version-805757
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-805757 -n old-k8s-version-805757
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.77s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (58.13s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-331077 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-331077 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (58.133773018s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (58.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-797441 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-797441 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m9.260821618s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (69.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.35s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-331077 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9ed8500c-e211-4a70-816e-749ab54b6ca9] Pending
helpers_test.go:344: "busybox" [9ed8500c-e211-4a70-816e-749ab54b6ca9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9ed8500c-e211-4a70-816e-749ab54b6ca9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004617513s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-331077 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-331077 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-331077 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.012624261s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-331077 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.11s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-331077 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-331077 --alsologtostderr -v=3: (12.11424073s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.52s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-797441 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [754e3ea0-2ef5-4769-b9ee-6c50197987cf] Pending
helpers_test.go:344: "busybox" [754e3ea0-2ef5-4769-b9ee-6c50197987cf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [754e3ea0-2ef5-4769-b9ee-6c50197987cf] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003653218s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-797441 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.52s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-331077 -n embed-certs-331077
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-331077 -n embed-certs-331077: exit status 7 (98.942611ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-331077 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (267.48s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-331077 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1014 14:35:33.487288    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-331077 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m27.14152651s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-331077 -n embed-certs-331077
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (267.48s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.68s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-797441 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-797441 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.492611462s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-797441 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.68s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.58s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-797441 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-797441 --alsologtostderr -v=3: (12.57635151s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.58s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-797441 -n default-k8s-diff-port-797441
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-797441 -n default-k8s-diff-port-797441: exit status 7 (125.72877ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-797441 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (290.84s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-797441 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1014 14:36:33.081797    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:37:09.979505    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:37:09.985902    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:37:09.997374    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:37:10.020720    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:37:10.062226    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:37:10.144312    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:37:10.305918    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:37:10.627659    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:37:11.269393    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:37:12.551631    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:37:15.113258    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:37:20.236364    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:37:30.477723    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:37:50.960061    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:38:28.648494    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:38:28.654879    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:38:28.666322    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:38:28.687697    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:38:28.729211    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:38:28.810632    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:38:28.972959    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:38:29.294684    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:38:29.936905    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:38:31.218767    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:38:31.922258    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:38:33.780969    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:38:38.903070    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:38:49.144882    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:39:09.626307    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:39:50.588483    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:39:53.844230    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-797441 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m50.311785352s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-797441 -n default-k8s-diff-port-797441
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (290.84s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-88vv4" [bc12cb2e-2d4c-4251-9b42-f40743bfa053] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003234996s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-88vv4" [bc12cb2e-2d4c-4251-9b42-f40743bfa053] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004364005s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-331077 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-331077 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.09s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-331077 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-331077 -n embed-certs-331077
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-331077 -n embed-certs-331077: exit status 2 (340.209571ms)

                                                
                                                
-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-331077 -n embed-certs-331077
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-331077 -n embed-certs-331077: exit status 2 (348.375807ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-331077 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-331077 -n embed-certs-331077
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-331077 -n embed-certs-331077
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.09s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (34.47s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-130220 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E1014 14:40:33.487250    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-130220 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (34.468673035s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.47s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5d6k6" [d2242553-fa97-45d1-9efe-8ff4341b0cf0] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004533116s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5d6k6" [d2242553-fa97-45d1-9efe-8ff4341b0cf0] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003739135s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-797441 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.42s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-130220 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-130220 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.420339705s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.28s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-130220 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-130220 --alsologtostderr -v=3: (1.284865038s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-130220 -n newest-cni-130220
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-130220 -n newest-cni-130220: exit status 7 (74.996698ms)

                                                
                                                
-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-130220 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (21.03s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-130220 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-130220 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (20.704045403s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-130220 -n newest-cni-130220
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (21.03s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-797441 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-797441 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-797441 -n default-k8s-diff-port-797441
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-797441 -n default-k8s-diff-port-797441: exit status 2 (403.655995ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-797441 -n default-k8s-diff-port-797441
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-797441 -n default-k8s-diff-port-797441: exit status 2 (389.906484ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-797441 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-797441 --alsologtostderr -v=1: (1.094330104s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-797441 -n default-k8s-diff-port-797441
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-797441 -n default-k8s-diff-port-797441
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.84s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (71.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-017567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E1014 14:41:12.510284    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-017567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m11.560481342s)
--- PASS: TestNetworkPlugins/group/auto/Start (71.56s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-130220 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-130220 --alsologtostderr -v=1
E1014 14:41:16.168628    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-130220 -n newest-cni-130220
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-130220 -n newest-cni-130220: exit status 2 (383.229226ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-130220 -n newest-cni-130220
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-130220 -n newest-cni-130220: exit status 2 (410.290561ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-130220 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-130220 -n newest-cni-130220
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-130220 -n newest-cni-130220
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.43s)
E1014 14:46:33.081835    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (57.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-017567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E1014 14:41:33.082555    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/addons-569374/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:42:09.979197    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/old-k8s-version-805757/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-017567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (57.867507322s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (57.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-017567 "pgrep -a kubelet"
I1014 14:42:17.523855    7542 config.go:182] Loaded profile config "auto-017567": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-017567 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vzcsh" [8921a543-c4e3-4758-b996-ab0a11856229] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vzcsh" [8921a543-c4e3-4758-b996-ab0a11856229] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003626174s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.29s)
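
The NetCatPod steps above apply testdata/netcat-deployment.yaml and then poll for up to 15m until a pod labelled app=netcat reports Running. The following client-go sketch shows that kind of wait loop; it is not the helpers_test.go implementation, and the kubeconfig path and 2-second poll interval are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabel polls until at least one pod matching the selector is Running,
// or the context expires. Sketch only; the real test helper may differ.
func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return nil
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second): // assumed poll interval
		}
	}
}

func main() {
	// Assumes the kubeconfig written by `minikube start -p auto-017567`.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Minute)
	defer cancel()
	fmt.Println(waitForLabel(ctx, cs, "default", "app=netcat"))
}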

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-r7f6g" [32d7d32a-2919-4186-89a5-636fe81fc533] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003915807s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-017567 "pgrep -a kubelet"
I1014 14:42:26.170747    7542 config.go:182] Loaded profile config "kindnet-017567": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-017567 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mfql8" [12fa452a-e854-4c5e-9674-e1704013c60e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mfql8" [12fa452a-e854-4c5e-9674-e1704013c60e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.007754104s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-017567 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-017567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-017567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)
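
The DNS, Localhost, and HairPin checks above all exec into the same netcat deployment: DNS resolves kubernetes.default, Localhost probes port 8080 on the pod itself, and HairPin connects back to the pod's own service name. A small Go wrapper around those exact kubectl invocations is sketched below; the context name comes from the log, and the wrapper itself is illustrative rather than the test's code.

package main

import (
	"fmt"
	"os/exec"
)

// probe execs a command inside the netcat deployment of the given kubectl
// context, mirroring the connectivity checks recorded above.
func probe(kubeContext string, args ...string) error {
	base := []string{"--context", kubeContext, "exec", "deployment/netcat", "--"}
	out, err := exec.Command("kubectl", append(base, args...)...).CombinedOutput()
	fmt.Printf("%s", out)
	return err
}

func main() {
	ctxName := "auto-017567"
	// DNS: resolve the in-cluster API service.
	_ = probe(ctxName, "nslookup", "kubernetes.default")
	// Localhost: port 8080 must answer inside the pod itself.
	_ = probe(ctxName, "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080")
	// HairPin: the pod must reach its own service by name.
	_ = probe(ctxName, "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")
}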

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-017567 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-017567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-017567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (77.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-017567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-017567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m17.945693742s)
--- PASS: TestNetworkPlugins/group/calico/Start (77.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (58.07s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-017567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E1014 14:43:28.648074    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:43:56.351620    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/no-preload-683238/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-017567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (58.071099661s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.07s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-017567 "pgrep -a kubelet"
I1014 14:43:59.658356    7542 config.go:182] Loaded profile config "custom-flannel-017567": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-017567 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wwvhx" [9ba544bf-d80a-4b02-97aa-2422cfc39243] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wwvhx" [9ba544bf-d80a-4b02-97aa-2422cfc39243] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003558162s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-67zh9" [0c16d257-2b86-4e73-b541-251bdc730990] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004687015s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-017567 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-017567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-017567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-017567 "pgrep -a kubelet"
I1014 14:44:14.706845    7542 config.go:182] Loaded profile config "calico-017567": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-017567 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mxf7w" [793cd05d-2169-4cf3-b707-1235eddf56d7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mxf7w" [793cd05d-2169-4cf3-b707-1235eddf56d7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005846079s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-017567 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-017567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-017567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (50.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-017567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-017567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (50.723786421s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (50.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (53.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-017567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1014 14:45:16.554229    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-017567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (53.542088466s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.54s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-017567 "pgrep -a kubelet"
I1014 14:45:25.318688    7542 config.go:182] Loaded profile config "enable-default-cni-017567": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-017567 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zn89w" [d1936dcc-a565-4b46-92ec-89dcaa31776b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zn89w" [d1936dcc-a565-4b46-92ec-89dcaa31776b] Running
E1014 14:45:29.820737    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/default-k8s-diff-port-797441/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:45:29.827166    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/default-k8s-diff-port-797441/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:45:29.838538    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/default-k8s-diff-port-797441/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:45:29.859890    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/default-k8s-diff-port-797441/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:45:29.903303    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/default-k8s-diff-port-797441/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:45:29.985165    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/default-k8s-diff-port-797441/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:45:30.148544    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/default-k8s-diff-port-797441/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:45:30.472627    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/default-k8s-diff-port-797441/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:45:31.113956    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/default-k8s-diff-port-797441/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:45:32.396098    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/default-k8s-diff-port-797441/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:45:33.487220    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/functional-729396/client.crt: no such file or directory" logger="UnhandledError"
E1014 14:45:34.958359    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/default-k8s-diff-port-797441/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004127707s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-017567 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-017567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-017567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-mvrf4" [2ba4bd4a-ed1c-4bb4-b4a5-48fbf52f51aa] Running
E1014 14:45:50.322120    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/default-k8s-diff-port-797441/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003844884s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-017567 "pgrep -a kubelet"
I1014 14:45:52.925754    7542 config.go:182] Loaded profile config "flannel-017567": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-017567 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6j76n" [f8625938-d6ea-4a6a-83b4-5aa1f141fdfb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6j76n" [f8625938-d6ea-4a6a-83b4-5aa1f141fdfb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.009377071s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (53.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-017567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-017567 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (53.289784742s)
--- PASS: TestNetworkPlugins/group/bridge/Start (53.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-017567 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-017567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-017567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-017567 "pgrep -a kubelet"
I1014 14:46:50.514754    7542 config.go:182] Loaded profile config "bridge-017567": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-017567 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-thxxb" [ed7d8c4e-742d-4607-85f3-9f46ea03c6a1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1014 14:46:51.765536    7542 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19790-2229/.minikube/profiles/default-k8s-diff-port-797441/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-thxxb" [ed7d8c4e-742d-4607-85f3-9f46ea03c6a1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003856559s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-017567 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-017567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-017567 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (28/329)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.54s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-570495 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-570495" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-570495
--- SKIP: TestDownloadOnlyKic (0.54s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:968: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)
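
Most skips in this section come from platform guards: this arm64/containerd job skips amd64-only and docker-runtime-only tests. A minimal sketch of such a guard follows, using Go's testing and runtime packages; it is not minikube's actual helper, only an illustration of the pattern behind these SKIP lines.

package example

import (
	"runtime"
	"testing"
)

// skipUnlessAMD64Docker sketches the kind of guard that produces the SKIP
// lines in this report; minikube's real helpers differ in detail.
func skipUnlessAMD64Docker(t *testing.T, containerRuntime string) {
	t.Helper()
	if runtime.GOARCH != "amd64" {
		t.Skipf("skipping on %s: test requires amd64", runtime.GOARCH)
	}
	if containerRuntime != "docker" {
		t.Skipf("skipping with %s runtime: test requires docker", containerRuntime)
	}
}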

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-595280" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-595280
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-017567 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-017567

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-017567

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-017567

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-017567

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-017567

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-017567

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-017567

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-017567

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-017567

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-017567

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: /etc/hosts:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: /etc/resolv.conf:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-017567

>>> host: crictl pods:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: crictl containers:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> k8s: describe netcat deployment:
error: context "kubenet-017567" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-017567" does not exist

>>> k8s: netcat logs:
error: context "kubenet-017567" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-017567" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-017567" does not exist

>>> k8s: coredns logs:
error: context "kubenet-017567" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-017567" does not exist

>>> k8s: api server logs:
error: context "kubenet-017567" does not exist

>>> host: /etc/cni:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: ip a s:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: ip r s:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: iptables-save:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: iptables table nat:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-017567" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-017567" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-017567" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: kubelet daemon config:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> k8s: kubelet logs:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-017567

>>> host: docker daemon status:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: docker daemon config:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: docker system info:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: cri-docker daemon status:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: cri-docker daemon config:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: cri-dockerd version:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: containerd daemon status:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: containerd daemon config:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: containerd config dump:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: crio daemon status:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: crio daemon config:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: /etc/crio:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

>>> host: crio config:
* Profile "kubenet-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-017567"

----------------------- debugLogs end: kubenet-017567 [took: 4.528442986s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-017567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-017567
--- SKIP: TestNetworkPlugins/group/kubenet (4.87s)

TestNetworkPlugins/group/cilium (5.47s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-017567 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-017567

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-017567

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-017567

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-017567

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-017567

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-017567

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-017567

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-017567

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-017567

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-017567

>>> host: /etc/nsswitch.conf:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: /etc/hosts:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: /etc/resolv.conf:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-017567

>>> host: crictl pods:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: crictl containers:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> k8s: describe netcat deployment:
error: context "cilium-017567" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-017567" does not exist

>>> k8s: netcat logs:
error: context "cilium-017567" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-017567" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-017567" does not exist

>>> k8s: coredns logs:
error: context "cilium-017567" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-017567" does not exist

>>> k8s: api server logs:
error: context "cilium-017567" does not exist

>>> host: /etc/cni:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: ip a s:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: ip r s:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: iptables-save:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: iptables table nat:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-017567

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-017567

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-017567" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-017567" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-017567

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-017567

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-017567" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-017567" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-017567" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-017567" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-017567" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: kubelet daemon config:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> k8s: kubelet logs:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-017567

>>> host: docker daemon status:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: docker daemon config:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: docker system info:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: cri-docker daemon status:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: cri-docker daemon config:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: cri-dockerd version:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: containerd daemon status:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: containerd daemon config:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: containerd config dump:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: crio daemon status:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: crio daemon config:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: /etc/crio:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

>>> host: crio config:
* Profile "cilium-017567" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-017567"

----------------------- debugLogs end: cilium-017567 [took: 5.228279801s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-017567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-017567
--- SKIP: TestNetworkPlugins/group/cilium (5.47s)
