Test Report: Docker_Linux_containerd_arm64 19696

60137f5eb61dd17472aeb1c9d9b63bd7ae7f04e6:2024-09-24:36347

Failed tests (2/327)

| Order | Failed test                                            | Duration (s) |
|-------|--------------------------------------------------------|--------------|
| 29    | TestAddons/serial/Volcano                              | 199.94       |
| 301   | TestStartStop/group/old-k8s-version/serial/SecondStart | 381.67       |
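
To reproduce locally, both failures can be selected by name with Go's subtest filter. A sketch, assuming minikube's standard integration-test layout; the exact flags this CI job passed are not shown in the report:

    # Hypothetical local re-run of only the failing tests (paths/flags assumed):
    go test ./test/integration -v -timeout 90m \
      -run 'TestAddons/serial/Volcano|TestStartStop/group/old-k8s-version/serial/SecondStart'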
TestAddons/serial/Volcano (199.94s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:835: volcano-scheduler stabilized in 55.566159ms
addons_test.go:851: volcano-controller stabilized in 56.852435ms
addons_test.go:843: volcano-admission stabilized in 57.030863ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-5889h" [b3905311-3e79-4571-b253-80f9093c3672] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004258285s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-6vsl6" [b99a4810-84cc-4bbc-9353-8a810dd731aa] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003750091s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-stspk" [be6f491a-f256-49e9-a358-bc1282f52164] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003714081s
addons_test.go:870: (dbg) Run:  kubectl --context addons-321431 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-321431 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-321431 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [c293216d-c61c-4b52-8c0a-dd29d3f34c35] Pending
helpers_test.go:344: "test-job-nginx-0" [c293216d-c61c-4b52-8c0a-dd29d3f34c35] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:902: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:902: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-321431 -n addons-321431
addons_test.go:902: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-24 00:31:02.689775129 +0000 UTC m=+433.970932317
addons_test.go:902: (dbg) Run:  kubectl --context addons-321431 describe po test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-321431 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-fbd7e40c-2ef0-4d5d-aedf-b07b3de5b2a9
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9vrrm (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-9vrrm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m58s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:902: (dbg) Run:  kubectl --context addons-321431 logs test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-321431 logs test-job-nginx-0 -n my-volcano:
addons_test.go:903: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
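Editor's note: the describe output above shows the test pod requesting a full CPU (Requests/Limits cpu: 1) on a single-node cluster created with only 2 CPUs (see NanoCpus in the docker inspect below), so once the control-plane and addon pods have claimed their share, Volcano reports "Insufficient cpu". A sketch of how one might confirm the CPU pressure against this cluster; these diagnostic commands are assumptions, not part of the recorded run:

    # Compare the node's allocatable CPU with what is already requested.
    kubectl --context addons-321431 describe node addons-321431 | grep -A 8 'Allocated resources'

    # List per-pod CPU requests to see what is consuming the budget.
    kubectl --context addons-321431 get pods -A \
      -o custom-columns='NS:.metadata.namespace,POD:.metadata.name,CPU:.spec.containers[*].resources.requests.cpu'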
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-321431
helpers_test.go:235: (dbg) docker inspect addons-321431:

-- stdout --
	[
	    {
	        "Id": "3792430458af08fd61a95a872702c361fedcbdbe271604bcefe7d5369f8a5b24",
	        "Created": "2024-09-24T00:24:35.892005484Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 302969,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-24T00:24:36.07151655Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
	        "ResolvConfPath": "/var/lib/docker/containers/3792430458af08fd61a95a872702c361fedcbdbe271604bcefe7d5369f8a5b24/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3792430458af08fd61a95a872702c361fedcbdbe271604bcefe7d5369f8a5b24/hostname",
	        "HostsPath": "/var/lib/docker/containers/3792430458af08fd61a95a872702c361fedcbdbe271604bcefe7d5369f8a5b24/hosts",
	        "LogPath": "/var/lib/docker/containers/3792430458af08fd61a95a872702c361fedcbdbe271604bcefe7d5369f8a5b24/3792430458af08fd61a95a872702c361fedcbdbe271604bcefe7d5369f8a5b24-json.log",
	        "Name": "/addons-321431",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-321431:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-321431",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/75e31b04672d46c6b6ea477020db04dedbf42ef8c9bc6e73ca41c652c53267a1-init/diff:/var/lib/docker/overlay2/7ad1ac86d8d84caef983ee398d28a66996d884096876cd745ca39b66abf10752/diff",
	                "MergedDir": "/var/lib/docker/overlay2/75e31b04672d46c6b6ea477020db04dedbf42ef8c9bc6e73ca41c652c53267a1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/75e31b04672d46c6b6ea477020db04dedbf42ef8c9bc6e73ca41c652c53267a1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/75e31b04672d46c6b6ea477020db04dedbf42ef8c9bc6e73ca41c652c53267a1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-321431",
	                "Source": "/var/lib/docker/volumes/addons-321431/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-321431",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-321431",
	                "name.minikube.sigs.k8s.io": "addons-321431",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6d3c29e6bebb9be156188feee22de4b250d21afb3843c5f1b66a1498859ef8a9",
	            "SandboxKey": "/var/run/docker/netns/6d3c29e6bebb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-321431": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a4d6be303c6e4e468127b298c8b8b38435d7ab967532cfcec4d46faf0fd3a06d",
	                    "EndpointID": "237a1d837acd8b7c0e1291535610f04224a358a1d15599ac00bd2d6953207710",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-321431",
	                        "3792430458af"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
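The HostConfig above pins down the node's capacity: NanoCpus 2000000000 (2 CPUs) and Memory 4194304000 bytes (4000 MiB, matching --memory=4000 in the start flags). A minimal sketch for pulling just those fields out of the inspect JSON, assuming jq is available on the host:

    docker inspect addons-321431 | jq '.[0].HostConfig | {NanoCpus, Memory, MemorySwap}'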
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-321431 -n addons-321431
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-321431 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-321431 logs -n 25: (1.577497794s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-142004   | jenkins | v1.34.0 | 24 Sep 24 00:23 UTC |                     |
	|         | -p download-only-142004              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 24 Sep 24 00:24 UTC | 24 Sep 24 00:24 UTC |
	| delete  | -p download-only-142004              | download-only-142004   | jenkins | v1.34.0 | 24 Sep 24 00:24 UTC | 24 Sep 24 00:24 UTC |
	| start   | -o=json --download-only              | download-only-713417   | jenkins | v1.34.0 | 24 Sep 24 00:24 UTC |                     |
	|         | -p download-only-713417              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 24 Sep 24 00:24 UTC | 24 Sep 24 00:24 UTC |
	| delete  | -p download-only-713417              | download-only-713417   | jenkins | v1.34.0 | 24 Sep 24 00:24 UTC | 24 Sep 24 00:24 UTC |
	| delete  | -p download-only-142004              | download-only-142004   | jenkins | v1.34.0 | 24 Sep 24 00:24 UTC | 24 Sep 24 00:24 UTC |
	| delete  | -p download-only-713417              | download-only-713417   | jenkins | v1.34.0 | 24 Sep 24 00:24 UTC | 24 Sep 24 00:24 UTC |
	| start   | --download-only -p                   | download-docker-765331 | jenkins | v1.34.0 | 24 Sep 24 00:24 UTC |                     |
	|         | download-docker-765331               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-765331            | download-docker-765331 | jenkins | v1.34.0 | 24 Sep 24 00:24 UTC | 24 Sep 24 00:24 UTC |
	| start   | --download-only -p                   | binary-mirror-584606   | jenkins | v1.34.0 | 24 Sep 24 00:24 UTC |                     |
	|         | binary-mirror-584606                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:42843               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-584606              | binary-mirror-584606   | jenkins | v1.34.0 | 24 Sep 24 00:24 UTC | 24 Sep 24 00:24 UTC |
	| addons  | enable dashboard -p                  | addons-321431          | jenkins | v1.34.0 | 24 Sep 24 00:24 UTC |                     |
	|         | addons-321431                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-321431          | jenkins | v1.34.0 | 24 Sep 24 00:24 UTC |                     |
	|         | addons-321431                        |                        |         |         |                     |                     |
	| start   | -p addons-321431 --wait=true         | addons-321431          | jenkins | v1.34.0 | 24 Sep 24 00:24 UTC | 24 Sep 24 00:27 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 00:24:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 00:24:11.693708  302471 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:24:11.693944  302471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:24:11.693971  302471 out.go:358] Setting ErrFile to fd 2...
	I0924 00:24:11.694000  302471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:24:11.694384  302471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-296322/.minikube/bin
	I0924 00:24:11.695022  302471 out.go:352] Setting JSON to false
	I0924 00:24:11.696024  302471 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7597,"bootTime":1727129855,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0924 00:24:11.696129  302471 start.go:139] virtualization:  
	I0924 00:24:11.698582  302471 out.go:177] * [addons-321431] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0924 00:24:11.700885  302471 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 00:24:11.701054  302471 notify.go:220] Checking for updates...
	I0924 00:24:11.704761  302471 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 00:24:11.706484  302471 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-296322/kubeconfig
	I0924 00:24:11.708571  302471 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-296322/.minikube
	I0924 00:24:11.710532  302471 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0924 00:24:11.712589  302471 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 00:24:11.714752  302471 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 00:24:11.736643  302471 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0924 00:24:11.736796  302471 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 00:24:11.804304  302471 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-24 00:24:11.794322511 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 00:24:11.804436  302471 docker.go:318] overlay module found
	I0924 00:24:11.806609  302471 out.go:177] * Using the docker driver based on user configuration
	I0924 00:24:11.808471  302471 start.go:297] selected driver: docker
	I0924 00:24:11.808487  302471 start.go:901] validating driver "docker" against <nil>
	I0924 00:24:11.808501  302471 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 00:24:11.809218  302471 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 00:24:11.859266  302471 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-24 00:24:11.849219827 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 00:24:11.859492  302471 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 00:24:11.859726  302471 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 00:24:11.861677  302471 out.go:177] * Using Docker driver with root privileges
	I0924 00:24:11.863394  302471 cni.go:84] Creating CNI manager for ""
	I0924 00:24:11.863466  302471 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0924 00:24:11.863480  302471 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0924 00:24:11.863562  302471 start.go:340] cluster config:
	{Name:addons-321431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-321431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:24:11.865553  302471 out.go:177] * Starting "addons-321431" primary control-plane node in "addons-321431" cluster
	I0924 00:24:11.867252  302471 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0924 00:24:11.868984  302471 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0924 00:24:11.870935  302471 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0924 00:24:11.871006  302471 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-296322/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0924 00:24:11.871022  302471 cache.go:56] Caching tarball of preloaded images
	I0924 00:24:11.871087  302471 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0924 00:24:11.871308  302471 preload.go:172] Found /home/jenkins/minikube-integration/19696-296322/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 00:24:11.871329  302471 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0924 00:24:11.871687  302471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/config.json ...
	I0924 00:24:11.871717  302471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/config.json: {Name:mk426a9fadfea74b71e9f761c7cb0ddb0d736f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:24:11.887172  302471 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0924 00:24:11.887294  302471 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0924 00:24:11.887321  302471 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0924 00:24:11.887327  302471 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0924 00:24:11.887345  302471 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0924 00:24:11.887351  302471 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
	I0924 00:24:29.157066  302471 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
	I0924 00:24:29.157109  302471 cache.go:194] Successfully downloaded all kic artifacts
	I0924 00:24:29.157140  302471 start.go:360] acquireMachinesLock for addons-321431: {Name:mka8ae2a08d5ab1a25bcad45d0f570b8f949d66b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 00:24:29.157637  302471 start.go:364] duration metric: took 468.526µs to acquireMachinesLock for "addons-321431"
	I0924 00:24:29.157676  302471 start.go:93] Provisioning new machine with config: &{Name:addons-321431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-321431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0924 00:24:29.157765  302471 start.go:125] createHost starting for "" (driver="docker")
	I0924 00:24:29.160886  302471 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0924 00:24:29.161144  302471 start.go:159] libmachine.API.Create for "addons-321431" (driver="docker")
	I0924 00:24:29.161183  302471 client.go:168] LocalClient.Create starting
	I0924 00:24:29.161323  302471 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca.pem
	I0924 00:24:29.542712  302471 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/cert.pem
	I0924 00:24:29.716988  302471 cli_runner.go:164] Run: docker network inspect addons-321431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0924 00:24:29.732494  302471 cli_runner.go:211] docker network inspect addons-321431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0924 00:24:29.732574  302471 network_create.go:284] running [docker network inspect addons-321431] to gather additional debugging logs...
	I0924 00:24:29.732597  302471 cli_runner.go:164] Run: docker network inspect addons-321431
	W0924 00:24:29.748084  302471 cli_runner.go:211] docker network inspect addons-321431 returned with exit code 1
	I0924 00:24:29.748117  302471 network_create.go:287] error running [docker network inspect addons-321431]: docker network inspect addons-321431: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-321431 not found
	I0924 00:24:29.748132  302471 network_create.go:289] output of [docker network inspect addons-321431]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-321431 not found
	
	** /stderr **
	I0924 00:24:29.748234  302471 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0924 00:24:29.764306  302471 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001b32310}
	I0924 00:24:29.764359  302471 network_create.go:124] attempt to create docker network addons-321431 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0924 00:24:29.764424  302471 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-321431 addons-321431
	I0924 00:24:29.832779  302471 network_create.go:108] docker network addons-321431 192.168.49.0/24 created
	I0924 00:24:29.832813  302471 kic.go:121] calculated static IP "192.168.49.2" for the "addons-321431" container
	I0924 00:24:29.832886  302471 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0924 00:24:29.846167  302471 cli_runner.go:164] Run: docker volume create addons-321431 --label name.minikube.sigs.k8s.io=addons-321431 --label created_by.minikube.sigs.k8s.io=true
	I0924 00:24:29.865214  302471 oci.go:103] Successfully created a docker volume addons-321431
	I0924 00:24:29.865321  302471 cli_runner.go:164] Run: docker run --rm --name addons-321431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-321431 --entrypoint /usr/bin/test -v addons-321431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
	I0924 00:24:31.817538  302471 cli_runner.go:217] Completed: docker run --rm --name addons-321431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-321431 --entrypoint /usr/bin/test -v addons-321431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (1.952161348s)
	I0924 00:24:31.817572  302471 oci.go:107] Successfully prepared a docker volume addons-321431
	I0924 00:24:31.817600  302471 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0924 00:24:31.817628  302471 kic.go:194] Starting extracting preloaded images to volume ...
	I0924 00:24:31.817697  302471 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19696-296322/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-321431:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
	I0924 00:24:35.824495  302471 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19696-296322/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-321431:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (4.006749939s)
	I0924 00:24:35.824528  302471 kic.go:203] duration metric: took 4.006896342s to extract preloaded images to volume ...
	W0924 00:24:35.824671  302471 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0924 00:24:35.824785  302471 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0924 00:24:35.876337  302471 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-321431 --name addons-321431 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-321431 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-321431 --network addons-321431 --ip 192.168.49.2 --volume addons-321431:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
	I0924 00:24:36.237561  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Running}}
	I0924 00:24:36.262660  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
	I0924 00:24:36.286965  302471 cli_runner.go:164] Run: docker exec addons-321431 stat /var/lib/dpkg/alternatives/iptables
	I0924 00:24:36.351243  302471 oci.go:144] the created container "addons-321431" has a running status.
	I0924 00:24:36.351272  302471 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa...
	I0924 00:24:37.143734  302471 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0924 00:24:37.167901  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
	I0924 00:24:37.187909  302471 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0924 00:24:37.187934  302471 kic_runner.go:114] Args: [docker exec --privileged addons-321431 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0924 00:24:37.256575  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
	I0924 00:24:37.275545  302471 machine.go:93] provisionDockerMachine start ...
	I0924 00:24:37.275647  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:24:37.295514  302471 main.go:141] libmachine: Using SSH client type: native
	I0924 00:24:37.295787  302471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I0924 00:24:37.295797  302471 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 00:24:37.431550  302471 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-321431
	
	I0924 00:24:37.431581  302471 ubuntu.go:169] provisioning hostname "addons-321431"
	I0924 00:24:37.431700  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:24:37.453057  302471 main.go:141] libmachine: Using SSH client type: native
	I0924 00:24:37.453314  302471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I0924 00:24:37.453332  302471 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-321431 && echo "addons-321431" | sudo tee /etc/hostname
	I0924 00:24:37.596331  302471 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-321431
	
	I0924 00:24:37.596416  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:24:37.615803  302471 main.go:141] libmachine: Using SSH client type: native
	I0924 00:24:37.616047  302471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I0924 00:24:37.616069  302471 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-321431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-321431/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-321431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 00:24:37.747000  302471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 00:24:37.747025  302471 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19696-296322/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-296322/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-296322/.minikube}
	I0924 00:24:37.747063  302471 ubuntu.go:177] setting up certificates
	I0924 00:24:37.747073  302471 provision.go:84] configureAuth start
	I0924 00:24:37.747140  302471 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-321431
	I0924 00:24:37.763815  302471 provision.go:143] copyHostCerts
	I0924 00:24:37.763907  302471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-296322/.minikube/ca.pem (1078 bytes)
	I0924 00:24:37.764036  302471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-296322/.minikube/cert.pem (1123 bytes)
	I0924 00:24:37.764106  302471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-296322/.minikube/key.pem (1675 bytes)
	I0924 00:24:37.764161  302471 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-296322/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca-key.pem org=jenkins.addons-321431 san=[127.0.0.1 192.168.49.2 addons-321431 localhost minikube]
	I0924 00:24:38.261266  302471 provision.go:177] copyRemoteCerts
	I0924 00:24:38.261347  302471 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 00:24:38.261429  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:24:38.279457  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	I0924 00:24:38.371450  302471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 00:24:38.395707  302471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0924 00:24:38.421567  302471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 00:24:38.446623  302471 provision.go:87] duration metric: took 699.535986ms to configureAuth
	I0924 00:24:38.446649  302471 ubuntu.go:193] setting minikube options for container-runtime
	I0924 00:24:38.446848  302471 config.go:182] Loaded profile config "addons-321431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 00:24:38.446864  302471 machine.go:96] duration metric: took 1.171294449s to provisionDockerMachine
	I0924 00:24:38.446871  302471 client.go:171] duration metric: took 9.285677741s to LocalClient.Create
	I0924 00:24:38.446885  302471 start.go:167] duration metric: took 9.2857422s to libmachine.API.Create "addons-321431"
	I0924 00:24:38.446896  302471 start.go:293] postStartSetup for "addons-321431" (driver="docker")
	I0924 00:24:38.446923  302471 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 00:24:38.446987  302471 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 00:24:38.447029  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:24:38.464146  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	I0924 00:24:38.556251  302471 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 00:24:38.559538  302471 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0924 00:24:38.559575  302471 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0924 00:24:38.559591  302471 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0924 00:24:38.559599  302471 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0924 00:24:38.559609  302471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-296322/.minikube/addons for local assets ...
	I0924 00:24:38.559677  302471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-296322/.minikube/files for local assets ...
	I0924 00:24:38.559700  302471 start.go:296] duration metric: took 112.797254ms for postStartSetup
	I0924 00:24:38.560013  302471 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-321431
	I0924 00:24:38.576157  302471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/config.json ...
	I0924 00:24:38.576468  302471 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 00:24:38.576516  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:24:38.593938  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	I0924 00:24:38.683555  302471 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0924 00:24:38.688000  302471 start.go:128] duration metric: took 9.530217006s to createHost
	I0924 00:24:38.688024  302471 start.go:83] releasing machines lock for "addons-321431", held for 9.530369579s
	I0924 00:24:38.688096  302471 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-321431
	I0924 00:24:38.703953  302471 ssh_runner.go:195] Run: cat /version.json
	I0924 00:24:38.704004  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:24:38.704033  302471 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 00:24:38.704106  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:24:38.722884  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	I0924 00:24:38.725000  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	I0924 00:24:38.814387  302471 ssh_runner.go:195] Run: systemctl --version
	I0924 00:24:38.944135  302471 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0924 00:24:38.948736  302471 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0924 00:24:38.974765  302471 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
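(Aside: after the patch above, a minimal loopback config has roughly this shape; a sketch of what the sed edits produce, not a file captured from this run.)

    {
        "cniVersion": "1.0.0",
        "name": "loopback",
        "type": "loopback"
    }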
	I0924 00:24:38.974876  302471 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 00:24:39.009733  302471 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
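(Aside: the disable step above is a find/-exec rename; an equivalent standalone command with the quoting spelled out, illustrative rather than copied from minikube:)

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;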
	I0924 00:24:39.009771  302471 start.go:495] detecting cgroup driver to use...
	I0924 00:24:39.009826  302471 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0924 00:24:39.009906  302471 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0924 00:24:39.024446  302471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0924 00:24:39.037348  302471 docker.go:217] disabling cri-docker service (if available) ...
	I0924 00:24:39.037476  302471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 00:24:39.052664  302471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 00:24:39.068411  302471 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 00:24:39.170192  302471 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 00:24:39.256760  302471 docker.go:233] disabling docker service ...
	I0924 00:24:39.256873  302471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 00:24:39.279112  302471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 00:24:39.291736  302471 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 00:24:39.385186  302471 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 00:24:39.475632  302471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 00:24:39.487857  302471 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 00:24:39.504888  302471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0924 00:24:39.515484  302471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0924 00:24:39.526322  302471 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0924 00:24:39.526390  302471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0924 00:24:39.536806  302471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0924 00:24:39.547920  302471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0924 00:24:39.558542  302471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0924 00:24:39.568410  302471 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 00:24:39.577474  302471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0924 00:24:39.587105  302471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0924 00:24:39.597042  302471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0924 00:24:39.606682  302471 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 00:24:39.615268  302471 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 00:24:39.623865  302471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:24:39.709959  302471 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0924 00:24:39.836682  302471 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0924 00:24:39.836795  302471 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
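(Aside: the 60s socket wait amounts to polling for the unix socket; a shell sketch of the equivalent loop, not minikube's actual Go implementation:)

    for _ in $(seq 1 60); do
        [ -S /run/containerd/containerd.sock ] && break
        sleep 1
    done
    stat /run/containerd/containerd.sock   # fails if the socket never appeared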
	I0924 00:24:39.841401  302471 start.go:563] Will wait 60s for crictl version
	I0924 00:24:39.841489  302471 ssh_runner.go:195] Run: which crictl
	I0924 00:24:39.845219  302471 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 00:24:39.887205  302471 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
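(Aside: the version probe above succeeds because the tee earlier pointed crictl at containerd's socket; the resulting /etc/crictl.yaml is the one-liner shown in the log:)

    cat /etc/crictl.yaml
    # runtime-endpoint: unix:///run/containerd/containerd.sock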
	I0924 00:24:39.887321  302471 ssh_runner.go:195] Run: containerd --version
	I0924 00:24:39.909483  302471 ssh_runner.go:195] Run: containerd --version
	I0924 00:24:39.933147  302471 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0924 00:24:39.935508  302471 cli_runner.go:164] Run: docker network inspect addons-321431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0924 00:24:39.951774  302471 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0924 00:24:39.955572  302471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
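(Aside: the hosts edit above is an idempotent replace: drop any stale host.minikube.internal line, append the fresh mapping, then copy the temp file back over /etc/hosts. The same pattern recurs below for control-plane.minikube.internal. Expanded with comments, illustrative:)

    grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$   # keep everything but the old entry
    printf '192.168.49.1\thost.minikube.internal\n' >> /tmp/h.$$  # gateway IP as seen from the node
    sudo cp /tmp/h.$$ /etc/hosts                                  # write the result back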
	I0924 00:24:39.966859  302471 kubeadm.go:883] updating cluster {Name:addons-321431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-321431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 00:24:39.967036  302471 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0924 00:24:39.967114  302471 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 00:24:40.029522  302471 containerd.go:627] all images are preloaded for containerd runtime.
	I0924 00:24:40.029550  302471 containerd.go:534] Images already preloaded, skipping extraction
	I0924 00:24:40.029621  302471 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 00:24:40.073487  302471 containerd.go:627] all images are preloaded for containerd runtime.
	I0924 00:24:40.073517  302471 cache_images.go:84] Images are preloaded, skipping loading
	I0924 00:24:40.073527  302471 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0924 00:24:40.073631  302471 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-321431 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-321431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 00:24:40.073710  302471 ssh_runner.go:195] Run: sudo crictl info
	I0924 00:24:40.113569  302471 cni.go:84] Creating CNI manager for ""
	I0924 00:24:40.113601  302471 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0924 00:24:40.113613  302471 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 00:24:40.113639  302471 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-321431 NodeName:addons-321431 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 00:24:40.113814  302471 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-321431"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 00:24:40.113893  302471 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 00:24:40.125231  302471 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 00:24:40.125312  302471 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 00:24:40.135428  302471 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0924 00:24:40.156421  302471 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 00:24:40.176900  302471 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
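(Aside: the rendered config now sits at /var/tmp/minikube/kubeadm.yaml.new; recent kubeadm releases can sanity-check such a file before init. Not a step minikube runs here, just a useful check:)

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new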
	I0924 00:24:40.197352  302471 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0924 00:24:40.201293  302471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 00:24:40.214039  302471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:24:40.296653  302471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:24:40.312846  302471 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431 for IP: 192.168.49.2
	I0924 00:24:40.312871  302471 certs.go:194] generating shared ca certs ...
	I0924 00:24:40.312887  302471 certs.go:226] acquiring lock for ca certs: {Name:mk4a6ab65221805436b06c42ec4fde316fe470ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:24:40.313442  302471 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-296322/.minikube/ca.key
	I0924 00:24:40.642066  302471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-296322/.minikube/ca.crt ...
	I0924 00:24:40.642101  302471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-296322/.minikube/ca.crt: {Name:mke024ebffb4bcdbcd1a8e5f9d4884920cde9c7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:24:40.642804  302471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-296322/.minikube/ca.key ...
	I0924 00:24:40.642831  302471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-296322/.minikube/ca.key: {Name:mk27550eb7b4bb65e19837bf97c0ae89c450614a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:24:40.643365  302471 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-296322/.minikube/proxy-client-ca.key
	I0924 00:24:41.269589  302471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-296322/.minikube/proxy-client-ca.crt ...
	I0924 00:24:41.269623  302471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-296322/.minikube/proxy-client-ca.crt: {Name:mk55bced99475a1ac2979cfaf93f62a0d69064cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:24:41.269818  302471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-296322/.minikube/proxy-client-ca.key ...
	I0924 00:24:41.269831  302471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-296322/.minikube/proxy-client-ca.key: {Name:mk8ceb1ebfe66f69a1b650355c4af6dac8726f39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:24:41.270485  302471 certs.go:256] generating profile certs ...
	I0924 00:24:41.270586  302471 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.key
	I0924 00:24:41.270640  302471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt with IP's: []
	I0924 00:24:41.571966  302471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt ...
	I0924 00:24:41.572000  302471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: {Name:mk41a499d400d0dc3a082a10ba82324af5cb195e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:24:41.572704  302471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.key ...
	I0924 00:24:41.572725  302471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.key: {Name:mk8c26d74e5d15e3afc8d557786fa620c550fc85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:24:41.573347  302471 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/apiserver.key.84edc160
	I0924 00:24:41.573375  302471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/apiserver.crt.84edc160 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0924 00:24:41.979414  302471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/apiserver.crt.84edc160 ...
	I0924 00:24:41.979446  302471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/apiserver.crt.84edc160: {Name:mk34b33a68d0b81990b27961c418f92191e7f3c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:24:41.980081  302471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/apiserver.key.84edc160 ...
	I0924 00:24:41.980100  302471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/apiserver.key.84edc160: {Name:mkc3442b3bdd0660b49a46554db2a14fad4a361c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:24:41.980613  302471 certs.go:381] copying /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/apiserver.crt.84edc160 -> /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/apiserver.crt
	I0924 00:24:41.980698  302471 certs.go:385] copying /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/apiserver.key.84edc160 -> /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/apiserver.key
	I0924 00:24:41.980756  302471 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/proxy-client.key
	I0924 00:24:41.980777  302471 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/proxy-client.crt with IP's: []
	I0924 00:24:42.130993  302471 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/proxy-client.crt ...
	I0924 00:24:42.131027  302471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/proxy-client.crt: {Name:mk3aaa3e5a4e04cf900770590f021ceebc76f8e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:24:42.131773  302471 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/proxy-client.key ...
	I0924 00:24:42.131805  302471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/proxy-client.key: {Name:mk6006428698ca77b3914c5799834da0847376a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:24:42.132772  302471 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 00:24:42.132834  302471 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca.pem (1078 bytes)
	I0924 00:24:42.132885  302471 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/cert.pem (1123 bytes)
	I0924 00:24:42.132921  302471 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/key.pem (1675 bytes)
	I0924 00:24:42.133613  302471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 00:24:42.174208  302471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 00:24:42.205775  302471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 00:24:42.236423  302471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 00:24:42.266404  302471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0924 00:24:42.294585  302471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 00:24:42.323369  302471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 00:24:42.350338  302471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 00:24:42.377066  302471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 00:24:42.403539  302471 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 00:24:42.422697  302471 ssh_runner.go:195] Run: openssl version
	I0924 00:24:42.428331  302471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 00:24:42.438314  302471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:24:42.442091  302471 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:24:42.442205  302471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 00:24:42.449347  302471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
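(Aside: b5213941.0 follows OpenSSL's subject-hash convention, which lets verifiers locate a CA by the hash of its subject; the link could be rebuilt by hand like so, illustrative:)

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"   # here h is b5213941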
	I0924 00:24:42.459325  302471 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 00:24:42.462667  302471 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0924 00:24:42.462725  302471 kubeadm.go:392] StartCluster: {Name:addons-321431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-321431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:24:42.462811  302471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0924 00:24:42.462874  302471 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 00:24:42.501010  302471 cri.go:89] found id: ""
	I0924 00:24:42.501083  302471 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 00:24:42.510313  302471 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0924 00:24:42.519611  302471 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0924 00:24:42.519679  302471 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0924 00:24:42.531070  302471 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0924 00:24:42.531088  302471 kubeadm.go:157] found existing configuration files:
	
	I0924 00:24:42.531153  302471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0924 00:24:42.540030  302471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0924 00:24:42.540118  302471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0924 00:24:42.548711  302471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0924 00:24:42.557398  302471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0924 00:24:42.557511  302471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0924 00:24:42.566426  302471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0924 00:24:42.575338  302471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0924 00:24:42.575408  302471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0924 00:24:42.583925  302471 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0924 00:24:42.593013  302471 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0924 00:24:42.593083  302471 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0924 00:24:42.601518  302471 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0924 00:24:42.651030  302471 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0924 00:24:42.651354  302471 kubeadm.go:310] [preflight] Running pre-flight checks
	I0924 00:24:42.681053  302471 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0924 00:24:42.681133  302471 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0924 00:24:42.681178  302471 kubeadm.go:310] OS: Linux
	I0924 00:24:42.681231  302471 kubeadm.go:310] CGROUPS_CPU: enabled
	I0924 00:24:42.681283  302471 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0924 00:24:42.681334  302471 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0924 00:24:42.681386  302471 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0924 00:24:42.681437  302471 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0924 00:24:42.681488  302471 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0924 00:24:42.681539  302471 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0924 00:24:42.681590  302471 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0924 00:24:42.681640  302471 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0924 00:24:42.749723  302471 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0924 00:24:42.749839  302471 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0924 00:24:42.749944  302471 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0924 00:24:42.759405  302471 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0924 00:24:42.762479  302471 out.go:235]   - Generating certificates and keys ...
	I0924 00:24:42.762668  302471 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0924 00:24:42.762785  302471 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0924 00:24:43.079299  302471 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0924 00:24:43.277416  302471 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0924 00:24:43.509219  302471 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0924 00:24:43.653411  302471 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0924 00:24:44.119943  302471 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0924 00:24:44.120272  302471 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-321431 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0924 00:24:45.288749  302471 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0924 00:24:45.288886  302471 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-321431 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0924 00:24:45.801340  302471 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0924 00:24:46.100231  302471 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0924 00:24:46.614137  302471 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0924 00:24:46.614433  302471 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0924 00:24:47.053320  302471 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0924 00:24:47.528346  302471 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0924 00:24:47.836453  302471 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0924 00:24:48.129603  302471 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0924 00:24:48.443027  302471 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0924 00:24:48.444577  302471 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0924 00:24:48.450250  302471 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0924 00:24:48.452412  302471 out.go:235]   - Booting up control plane ...
	I0924 00:24:48.452545  302471 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0924 00:24:48.452641  302471 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0924 00:24:48.453433  302471 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0924 00:24:48.467753  302471 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0924 00:24:48.475461  302471 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0924 00:24:48.475709  302471 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0924 00:24:48.575347  302471 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0924 00:24:48.575473  302471 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0924 00:24:49.577560  302471 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00646899s
	I0924 00:24:49.577649  302471 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0924 00:24:56.079530  302471 kubeadm.go:310] [api-check] The API server is healthy after 6.502241555s
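(Aside: both health gates above are plain HTTP(S) probes and can be reproduced by hand; the kubelet's healthz is local to the node, and the apiserver's /healthz is readable without credentials via the default system:public-info-viewer binding. Illustrative:)

    curl -s http://127.0.0.1:10248/healthz; echo      # kubelet, from inside the node
    curl -sk https://192.168.49.2:8443/healthz; echo  # apiserver; -k because the minikube CA isn't in the host trust store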
	I0924 00:24:56.101853  302471 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0924 00:24:56.117396  302471 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0924 00:24:56.143576  302471 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0924 00:24:56.143772  302471 kubeadm.go:310] [mark-control-plane] Marking the node addons-321431 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0924 00:24:56.154958  302471 kubeadm.go:310] [bootstrap-token] Using token: ga5us3.i4719mz67a186lsv
	I0924 00:24:56.157034  302471 out.go:235]   - Configuring RBAC rules ...
	I0924 00:24:56.157172  302471 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0924 00:24:56.161968  302471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0924 00:24:56.172169  302471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0924 00:24:56.176367  302471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0924 00:24:56.180622  302471 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0924 00:24:56.184576  302471 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0924 00:24:56.486630  302471 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0924 00:24:56.912701  302471 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0924 00:24:57.486271  302471 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0924 00:24:57.487511  302471 kubeadm.go:310] 
	I0924 00:24:57.487587  302471 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0924 00:24:57.487598  302471 kubeadm.go:310] 
	I0924 00:24:57.487676  302471 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0924 00:24:57.487691  302471 kubeadm.go:310] 
	I0924 00:24:57.487718  302471 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0924 00:24:57.487783  302471 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0924 00:24:57.487838  302471 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0924 00:24:57.487846  302471 kubeadm.go:310] 
	I0924 00:24:57.487901  302471 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0924 00:24:57.487909  302471 kubeadm.go:310] 
	I0924 00:24:57.487958  302471 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0924 00:24:57.487966  302471 kubeadm.go:310] 
	I0924 00:24:57.488018  302471 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0924 00:24:57.488097  302471 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0924 00:24:57.488177  302471 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0924 00:24:57.488186  302471 kubeadm.go:310] 
	I0924 00:24:57.488293  302471 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0924 00:24:57.488374  302471 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0924 00:24:57.488383  302471 kubeadm.go:310] 
	I0924 00:24:57.488467  302471 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ga5us3.i4719mz67a186lsv \
	I0924 00:24:57.488576  302471 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9d2b593c8b7b2bd32f88a4bc30c3e9b006b2d5c6e312013a902df40b63f49fc9 \
	I0924 00:24:57.488602  302471 kubeadm.go:310] 	--control-plane 
	I0924 00:24:57.488612  302471 kubeadm.go:310] 
	I0924 00:24:57.488697  302471 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0924 00:24:57.488705  302471 kubeadm.go:310] 
	I0924 00:24:57.488787  302471 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ga5us3.i4719mz67a186lsv \
	I0924 00:24:57.488894  302471 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:9d2b593c8b7b2bd32f88a4bc30c3e9b006b2d5c6e312013a902df40b63f49fc9 
	I0924 00:24:57.492885  302471 kubeadm.go:310] W0924 00:24:42.647600    1013 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 00:24:57.493189  302471 kubeadm.go:310] W0924 00:24:42.648609    1013 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0924 00:24:57.493405  302471 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0924 00:24:57.493513  302471 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0924 00:24:57.493535  302471 cni.go:84] Creating CNI manager for ""
	I0924 00:24:57.493546  302471 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0924 00:24:57.496074  302471 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0924 00:24:57.497929  302471 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0924 00:24:57.501761  302471 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0924 00:24:57.501783  302471 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0924 00:24:57.520366  302471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0924 00:24:57.820787  302471 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0924 00:24:57.820916  302471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 00:24:57.820985  302471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-321431 minikube.k8s.io/updated_at=2024_09_24T00_24_57_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c minikube.k8s.io/name=addons-321431 minikube.k8s.io/primary=true
	I0924 00:24:58.025098  302471 ops.go:34] apiserver oom_adj: -16
	I0924 00:24:58.025224  302471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 00:24:58.525328  302471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 00:24:59.025640  302471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 00:24:59.526063  302471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 00:25:00.028549  302471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 00:25:00.525634  302471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 00:25:01.025328  302471 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0924 00:25:01.193926  302471 kubeadm.go:1113] duration metric: took 3.373055518s to wait for elevateKubeSystemPrivileges
	I0924 00:25:01.193958  302471 kubeadm.go:394] duration metric: took 18.731234106s to StartCluster
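(Aside: the half-second cadence of the repeated 'get sa default' runs above is minikube polling until the cluster's default ServiceAccount exists, the step timed above as elevateKubeSystemPrivileges; in spirit, a loop like this sketch:)

    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default \
            --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5
    done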
	I0924 00:25:01.193976  302471 settings.go:142] acquiring lock: {Name:mk1b01c5281da0b61714a1aa76e5632af5b39da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:25:01.194766  302471 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-296322/kubeconfig
	I0924 00:25:01.195221  302471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-296322/kubeconfig: {Name:mk12cf5f8c4244466c827b22ce4fe2341553290d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:25:01.196127  302471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0924 00:25:01.196158  302471 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0924 00:25:01.196435  302471 config.go:182] Loaded profile config "addons-321431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 00:25:01.196478  302471 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0924 00:25:01.196571  302471 addons.go:69] Setting yakd=true in profile "addons-321431"
	I0924 00:25:01.196590  302471 addons.go:234] Setting addon yakd=true in "addons-321431"
	I0924 00:25:01.196615  302471 host.go:66] Checking if "addons-321431" exists ...
	I0924 00:25:01.197170  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
	I0924 00:25:01.197794  302471 addons.go:69] Setting metrics-server=true in profile "addons-321431"
	I0924 00:25:01.197820  302471 addons.go:234] Setting addon metrics-server=true in "addons-321431"
	I0924 00:25:01.197855  302471 host.go:66] Checking if "addons-321431" exists ...
	I0924 00:25:01.197929  302471 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-321431"
	I0924 00:25:01.197959  302471 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-321431"
	I0924 00:25:01.197985  302471 host.go:66] Checking if "addons-321431" exists ...
	I0924 00:25:01.198319  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
	I0924 00:25:01.198567  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
	I0924 00:25:01.202028  302471 addons.go:69] Setting registry=true in profile "addons-321431"
	I0924 00:25:01.202114  302471 addons.go:234] Setting addon registry=true in "addons-321431"
	I0924 00:25:01.202197  302471 host.go:66] Checking if "addons-321431" exists ...
	I0924 00:25:01.202719  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
	I0924 00:25:01.203194  302471 addons.go:69] Setting cloud-spanner=true in profile "addons-321431"
	I0924 00:25:01.203229  302471 addons.go:234] Setting addon cloud-spanner=true in "addons-321431"
	I0924 00:25:01.203267  302471 host.go:66] Checking if "addons-321431" exists ...
	I0924 00:25:01.203736  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
	I0924 00:25:01.214147  302471 addons.go:69] Setting storage-provisioner=true in profile "addons-321431"
	I0924 00:25:01.214242  302471 addons.go:234] Setting addon storage-provisioner=true in "addons-321431"
	I0924 00:25:01.214318  302471 host.go:66] Checking if "addons-321431" exists ...
	I0924 00:25:01.215025  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
	I0924 00:25:01.216559  302471 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-321431"
	I0924 00:25:01.217241  302471 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-321431"
	I0924 00:25:01.217291  302471 host.go:66] Checking if "addons-321431" exists ...
	I0924 00:25:01.217775  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
	I0924 00:25:01.218349  302471 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-321431"
	I0924 00:25:01.218374  302471 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-321431"
	I0924 00:25:01.218687  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
	I0924 00:25:01.233404  302471 addons.go:69] Setting default-storageclass=true in profile "addons-321431"
	I0924 00:25:01.233449  302471 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-321431"
	I0924 00:25:01.233844  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
	I0924 00:25:01.245377  302471 addons.go:69] Setting gcp-auth=true in profile "addons-321431"
	I0924 00:25:01.245433  302471 mustload.go:65] Loading cluster: addons-321431
	I0924 00:25:01.245645  302471 config.go:182] Loaded profile config "addons-321431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 00:25:01.245745  302471 addons.go:69] Setting volcano=true in profile "addons-321431"
	I0924 00:25:01.245771  302471 addons.go:234] Setting addon volcano=true in "addons-321431"
	I0924 00:25:01.245805  302471 host.go:66] Checking if "addons-321431" exists ...
	I0924 00:25:01.245915  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
	I0924 00:25:01.246225  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
	I0924 00:25:01.257462  302471 addons.go:69] Setting volumesnapshots=true in profile "addons-321431"
	I0924 00:25:01.257500  302471 addons.go:234] Setting addon volumesnapshots=true in "addons-321431"
	I0924 00:25:01.257542  302471 host.go:66] Checking if "addons-321431" exists ...
	I0924 00:25:01.258042  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
	I0924 00:25:01.258201  302471 addons.go:69] Setting ingress=true in profile "addons-321431"
	I0924 00:25:01.258225  302471 addons.go:234] Setting addon ingress=true in "addons-321431"
	I0924 00:25:01.258261  302471 host.go:66] Checking if "addons-321431" exists ...
	I0924 00:25:01.258715  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
	I0924 00:25:01.273782  302471 out.go:177] * Verifying Kubernetes components...
	I0924 00:25:01.278282  302471 addons.go:69] Setting ingress-dns=true in profile "addons-321431"
	I0924 00:25:01.278311  302471 addons.go:234] Setting addon ingress-dns=true in "addons-321431"
	I0924 00:25:01.278364  302471 host.go:66] Checking if "addons-321431" exists ...
	I0924 00:25:01.278867  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
	I0924 00:25:01.295880  302471 addons.go:69] Setting inspektor-gadget=true in profile "addons-321431"
	I0924 00:25:01.295912  302471 addons.go:234] Setting addon inspektor-gadget=true in "addons-321431"
	I0924 00:25:01.295951  302471 host.go:66] Checking if "addons-321431" exists ...
	I0924 00:25:01.296456  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
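Every addon toggle above funnels through the same probe: `docker container inspect <profile> --format={{.State.Status}}`, which must report `running` before any manifests are pushed. A minimal Go sketch of that probe, using plain os/exec rather than minikube's cli_runner (which adds logging and retries); the helper name is hypothetical:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns the Docker state string ("running", "exited", ...)
// for the named container, mirroring the inspect command in the log above.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("addons-321431")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("container state:", state)
}
```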
	I0924 00:25:01.320218  302471 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0924 00:25:01.338548  302471 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0924 00:25:01.345844  302471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 00:25:01.347106  302471 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0924 00:25:01.362278  302471 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 00:25:01.362310  302471 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 00:25:01.362404  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:25:01.362926  302471 addons.go:234] Setting addon default-storageclass=true in "addons-321431"
	I0924 00:25:01.363005  302471 host.go:66] Checking if "addons-321431" exists ...
	I0924 00:25:01.373837  302471 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0924 00:25:01.373889  302471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0924 00:25:01.373984  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:25:01.374462  302471 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0924 00:25:01.374477  302471 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0924 00:25:01.374531  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
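The `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` calls resolve the host port published for the node container's SSH port (33140 in this run), and each later `new ssh client` line dials 127.0.0.1 on that port. The `scp memory --> /etc/kubernetes/addons/...` transfers then push in-memory manifests over that connection. A sketch of such a push, assuming an already-dialed *ssh.Client; the helper name is hypothetical and minikube's actual transfer protocol may differ:

```go
package sshxfer

import (
	"bytes"

	"golang.org/x/crypto/ssh"
)

// copyToNode streams data to dst on the node over an established SSH
// connection, roughly what the "scp memory --> ..." lines above do.
// Assumes the remote user can sudo without a password, as in the kicbase node.
func copyToNode(client *ssh.Client, data []byte, dst string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run("sudo tee " + dst + " >/dev/null")
}
```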
	I0924 00:25:01.396879  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
	I0924 00:25:01.406454  302471 host.go:66] Checking if "addons-321431" exists ...
	I0924 00:25:01.417109  302471 out.go:177]   - Using image docker.io/registry:2.8.3
	I0924 00:25:01.418741  302471 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0924 00:25:01.420912  302471 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0924 00:25:01.423335  302471 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0924 00:25:01.423359  302471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0924 00:25:01.423430  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:25:01.432653  302471 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 00:25:01.437670  302471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0924 00:25:01.437707  302471 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0924 00:25:01.437812  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:25:01.438118  302471 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0924 00:25:01.461353  302471 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0924 00:25:01.461414  302471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0924 00:25:01.461500  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:25:01.461748  302471 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 00:25:01.461778  302471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 00:25:01.461845  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:25:01.491004  302471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0924 00:25:01.492884  302471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0924 00:25:01.495957  302471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0924 00:25:01.497135  302471 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-321431"
	I0924 00:25:01.497175  302471 host.go:66] Checking if "addons-321431" exists ...
	I0924 00:25:01.497589  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
	I0924 00:25:01.506410  302471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0924 00:25:01.515988  302471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0924 00:25:01.521882  302471 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0924 00:25:01.528074  302471 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0924 00:25:01.535109  302471 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0924 00:25:01.536684  302471 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0924 00:25:01.536710  302471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0924 00:25:01.536783  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:25:01.543247  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	I0924 00:25:01.563065  302471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
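The one-liner above is a ConfigMap rewrite: it fetches the coredns ConfigMap, uses sed to splice a `hosts` block in front of the `forward . /etc/resolv.conf` line (plus a `log` directive before `errors`), and pipes the result to `kubectl replace`. Reconstructed from the sed expressions, the fragment it injects into the Corefile is:

```
hosts {
   192.168.49.1 host.minikube.internal
   fallthrough
}
```

That maps host.minikube.internal to the Docker bridge gateway, which is what the later "host record injected into CoreDNS's ConfigMap" line confirms.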
	I0924 00:25:01.589775  302471 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0924 00:25:01.594963  302471 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0924 00:25:01.594989  302471 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0924 00:25:01.595105  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:25:01.605820  302471 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0924 00:25:01.611635  302471 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0924 00:25:01.613519  302471 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0924 00:25:01.624105  302471 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0924 00:25:01.624133  302471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0924 00:25:01.624202  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:25:01.628096  302471 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 00:25:01.632440  302471 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 00:25:01.635119  302471 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0924 00:25:01.640219  302471 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0924 00:25:01.641460  302471 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0924 00:25:01.641490  302471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0924 00:25:01.641575  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:25:01.642296  302471 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0924 00:25:01.642317  302471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0924 00:25:01.642379  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:25:01.688750  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	I0924 00:25:01.689589  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	I0924 00:25:01.690304  302471 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 00:25:01.690323  302471 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 00:25:01.690381  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:25:01.695454  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	I0924 00:25:01.696263  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	I0924 00:25:01.698306  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	I0924 00:25:01.719880  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	I0924 00:25:01.768482  302471 out.go:177]   - Using image docker.io/busybox:stable
	I0924 00:25:01.770449  302471 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0924 00:25:01.774193  302471 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0924 00:25:01.774220  302471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0924 00:25:01.774289  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:25:01.775290  302471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 00:25:01.783145  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	I0924 00:25:01.829800  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	I0924 00:25:01.841628  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	I0924 00:25:01.846564  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	I0924 00:25:01.858176  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	I0924 00:25:01.871030  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	W0924 00:25:01.872151  302471 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0924 00:25:01.872183  302471 retry.go:31] will retry after 349.976497ms: ssh: handshake failed: EOF
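The handshake failure above is transient (sshd inside the node container is still coming up), so the dial is retried after a jittered delay. A generic sketch of that pattern; this is not minikube's retry package, just the shape of it:

```go
package retryutil

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts are exhausted,
// sleeping a growing, jittered delay between tries -- the same shape as the
// "will retry after 349.976497ms" line above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
		time.Sleep(sleep)
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}
```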
	I0924 00:25:01.883065  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	I0924 00:25:02.373655  302471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0924 00:25:02.377294  302471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0924 00:25:02.536422  302471 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0924 00:25:02.536493  302471 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0924 00:25:02.539436  302471 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0924 00:25:02.539500  302471 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0924 00:25:02.545884  302471 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0924 00:25:02.545959  302471 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0924 00:25:02.649170  302471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 00:25:02.663152  302471 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 00:25:02.663179  302471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0924 00:25:02.722359  302471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0924 00:25:02.732177  302471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0924 00:25:02.735597  302471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 00:25:02.790026  302471 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0924 00:25:02.790092  302471 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0924 00:25:02.810813  302471 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0924 00:25:02.810880  302471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0924 00:25:02.868465  302471 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0924 00:25:02.868538  302471 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0924 00:25:02.873988  302471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0924 00:25:03.004684  302471 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0924 00:25:03.004774  302471 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0924 00:25:03.071137  302471 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0924 00:25:03.071215  302471 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0924 00:25:03.083735  302471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0924 00:25:03.086600  302471 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 00:25:03.086675  302471 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 00:25:03.100788  302471 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0924 00:25:03.100863  302471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0924 00:25:03.186253  302471 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0924 00:25:03.186335  302471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0924 00:25:03.272775  302471 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0924 00:25:03.272853  302471 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0924 00:25:03.335450  302471 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0924 00:25:03.335518  302471 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0924 00:25:03.360640  302471 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0924 00:25:03.360714  302471 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0924 00:25:03.412427  302471 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 00:25:03.412502  302471 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 00:25:03.422998  302471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0924 00:25:03.471144  302471 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0924 00:25:03.471222  302471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0924 00:25:03.539783  302471 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0924 00:25:03.539861  302471 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0924 00:25:03.544384  302471 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0924 00:25:03.544465  302471 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0924 00:25:03.570732  302471 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0924 00:25:03.570806  302471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0924 00:25:03.597897  302471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 00:25:03.717958  302471 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 00:25:03.718033  302471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0924 00:25:03.786734  302471 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.223584193s)
	I0924 00:25:03.786808  302471 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0924 00:25:03.787911  302471 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.012598619s)
	I0924 00:25:03.788701  302471 node_ready.go:35] waiting up to 6m0s for node "addons-321431" to be "Ready" ...
	I0924 00:25:03.795525  302471 node_ready.go:49] node "addons-321431" has status "Ready":"True"
	I0924 00:25:03.795595  302471 node_ready.go:38] duration metric: took 6.756035ms for node "addons-321431" to be "Ready" ...
	I0924 00:25:03.795620  302471 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 00:25:03.810077  302471 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-5jg6c" in "kube-system" namespace to be "Ready" ...
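pod_ready.go polls each named pod until its Ready condition turns True, up to the 6m0s budget. A minimal client-go sketch of the same check (hypothetical helper, not minikube's implementation):

```go
package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls until the pod's Ready condition is True or the
// timeout expires, roughly what pod_ready.go does for each pod above.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // not found yet; keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
```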
	I0924 00:25:03.915324  302471 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0924 00:25:03.915351  302471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0924 00:25:03.919380  302471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0924 00:25:04.007895  302471 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0924 00:25:04.007929  302471 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0924 00:25:04.047251  302471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 00:25:04.084719  302471 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0924 00:25:04.084748  302471 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0924 00:25:04.291231  302471 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-321431" context rescaled to 1 replicas
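The "rescaled to 1 replicas" line shrinks the coredns Deployment to a single replica (two coredns pods appear earlier in this log; one is torn down shortly after). Through client-go's scale subresource that looks roughly like the following sketch; minikube's kapi code may use a different mechanism:

```go
package rescale

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleDeployment sets the replica count through the scale subresource,
// e.g. scaleDeployment(cs, "kube-system", "coredns", 1).
func scaleDeployment(cs kubernetes.Interface, ns, name string, replicas int32) error {
	ctx := context.TODO()
	scale, err := cs.AppsV1().Deployments(ns).GetScale(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = cs.AppsV1().Deployments(ns).UpdateScale(ctx, name, scale, metav1.UpdateOptions{})
	return err
}
```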
	I0924 00:25:04.422795  302471 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0924 00:25:04.422869  302471 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0924 00:25:04.453032  302471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0924 00:25:04.453106  302471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0924 00:25:04.646827  302471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0924 00:25:04.646898  302471 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0924 00:25:04.662193  302471 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0924 00:25:04.662257  302471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0924 00:25:04.909477  302471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.535785355s)
	I0924 00:25:04.909584  302471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.532223159s)
	I0924 00:25:04.909656  302471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.260425809s)
	I0924 00:25:04.937772  302471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0924 00:25:05.057909  302471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0924 00:25:05.057987  302471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0924 00:25:05.327014  302471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0924 00:25:05.327102  302471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0924 00:25:05.697555  302471 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0924 00:25:05.697636  302471 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0924 00:25:05.844451  302471 pod_ready.go:103] pod "coredns-7c65d6cfc9-5jg6c" in "kube-system" namespace has status "Ready":"False"
	I0924 00:25:06.231441  302471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0924 00:25:06.654148  302471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.921892793s)
	I0924 00:25:06.654264  302471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.931836799s)
	I0924 00:25:06.806282  302471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.070610966s)
	I0924 00:25:08.325514  302471 pod_ready.go:103] pod "coredns-7c65d6cfc9-5jg6c" in "kube-system" namespace has status "Ready":"False"
	I0924 00:25:08.627401  302471 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0924 00:25:08.627524  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:25:08.654351  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	I0924 00:25:09.134568  302471 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0924 00:25:09.158942  302471 addons.go:234] Setting addon gcp-auth=true in "addons-321431"
	I0924 00:25:09.158992  302471 host.go:66] Checking if "addons-321431" exists ...
	I0924 00:25:09.159433  302471 cli_runner.go:164] Run: docker container inspect addons-321431 --format={{.State.Status}}
	I0924 00:25:09.186948  302471 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0924 00:25:09.187000  302471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-321431
	I0924 00:25:09.223435  302471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/addons-321431/id_rsa Username:docker}
	I0924 00:25:10.333222  302471 pod_ready.go:103] pod "coredns-7c65d6cfc9-5jg6c" in "kube-system" namespace has status "Ready":"False"
	I0924 00:25:10.396668  302471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (7.522591537s)
	I0924 00:25:10.396700  302471 addons.go:475] Verifying addon ingress=true in "addons-321431"
	I0924 00:25:10.400830  302471 out.go:177] * Verifying ingress addon...
	I0924 00:25:10.409214  302471 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0924 00:25:10.413573  302471 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0924 00:25:10.413650  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
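Unlike pod_ready.go, kapi.go waits on a label selector rather than a pod name; "Found 3 Pods for label selector" comes from a server-side filtered List. A sketch of that style of check (hypothetical helper, readiness simplified to the Running phase):

```go
package selwait

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podsRunningForSelector reports whether at least one pod matches the
// selector and all matches are Running, e.g.
// podsRunningForSelector(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx").
func podsRunningForSelector(cs kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	if len(pods.Items) == 0 {
		return false, nil
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return false, nil
		}
	}
	return true, nil
}
```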
	I0924 00:25:10.940956  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:11.448093  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:11.985050  302471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.901222934s)
	I0924 00:25:11.985156  302471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.562083299s)
	I0924 00:25:11.985198  302471 addons.go:475] Verifying addon registry=true in "addons-321431"
	I0924 00:25:11.985410  302471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.387419747s)
	I0924 00:25:11.985449  302471 addons.go:475] Verifying addon metrics-server=true in "addons-321431"
	I0924 00:25:11.985522  302471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.06610122s)
	I0924 00:25:11.985759  302471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.938469025s)
	W0924 00:25:11.985812  302471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0924 00:25:11.985835  302471 retry.go:31] will retry after 260.948058ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
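This failure is an ordering race, not a broken manifest: the VolumeSnapshotClass CR sits in the same apply batch as the CRD that defines it, and the API server has not finished registering the new kind when the CR arrives, hence "ensure CRDs are installed first". The retry (and the eventual `kubectl apply --force` at 00:25:12 below) resolves it. One way to avoid the race is to wait for the CRD to report Established before applying its CRs; a sketch using the apiextensions client:

```go
package crdwait

import (
	"context"
	"time"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
)

// waitCRDEstablished polls until the named CRD reports Established=True,
// avoiding the "no matches for kind" race seen above.
func waitCRDEstablished(c apiextclient.Interface, name string, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(
			context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // CRD not visible yet; keep polling
		}
		for _, cond := range crd.Status.Conditions {
			if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	})
}
```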
	I0924 00:25:11.985911  302471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.048068176s)
	I0924 00:25:11.987579  302471 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-321431 service yakd-dashboard -n yakd-dashboard
	
	I0924 00:25:11.987579  302471 out.go:177] * Verifying registry addon...
	I0924 00:25:11.990672  302471 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0924 00:25:12.024100  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:12.037574  302471 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0924 00:25:12.037615  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:12.247116  302471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0924 00:25:12.419800  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:12.519340  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:12.778432  302471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.546901795s)
	I0924 00:25:12.778557  302471 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.591588822s)
	I0924 00:25:12.778607  302471 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-321431"
	I0924 00:25:12.781465  302471 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0924 00:25:12.781599  302471 out.go:177] * Verifying csi-hostpath-driver addon...
	I0924 00:25:12.783928  302471 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0924 00:25:12.784887  302471 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0924 00:25:12.786619  302471 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0924 00:25:12.786681  302471 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0924 00:25:12.790378  302471 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0924 00:25:12.790435  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:12.821280  302471 pod_ready.go:103] pod "coredns-7c65d6cfc9-5jg6c" in "kube-system" namespace has status "Ready":"False"
	I0924 00:25:12.895660  302471 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0924 00:25:12.895728  302471 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0924 00:25:12.914086  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:12.936267  302471 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0924 00:25:12.936333  302471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0924 00:25:12.996147  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:13.067059  302471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0924 00:25:13.290485  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:13.414517  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:13.499321  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:13.790518  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:13.913990  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:13.997550  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:14.202630  302471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.955416694s)
	I0924 00:25:14.202790  302471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.135600441s)
	I0924 00:25:14.206289  302471 addons.go:475] Verifying addon gcp-auth=true in "addons-321431"
	I0924 00:25:14.210862  302471 out.go:177] * Verifying gcp-auth addon...
	I0924 00:25:14.213851  302471 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0924 00:25:14.216696  302471 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0924 00:25:14.290826  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:14.413688  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:14.495064  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:14.789424  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:14.914789  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:14.995007  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:15.317860  302471 pod_ready.go:103] pod "coredns-7c65d6cfc9-5jg6c" in "kube-system" namespace has status "Ready":"False"
	I0924 00:25:15.320550  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:15.414407  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:15.495749  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:15.790454  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:15.914145  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:16.013837  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:16.291481  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:16.416482  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:16.495458  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:16.793035  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:16.914368  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:16.995334  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:17.291542  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:17.414261  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:17.495139  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:17.790583  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:17.817414  302471 pod_ready.go:103] pod "coredns-7c65d6cfc9-5jg6c" in "kube-system" namespace has status "Ready":"False"
	I0924 00:25:17.914355  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:18.014431  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:18.289377  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:18.415385  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:18.494815  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:18.789800  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:18.914828  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:18.994996  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:19.294309  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:19.317953  302471 pod_ready.go:93] pod "coredns-7c65d6cfc9-5jg6c" in "kube-system" namespace has status "Ready":"True"
	I0924 00:25:19.317996  302471 pod_ready.go:82] duration metric: took 15.507635941s for pod "coredns-7c65d6cfc9-5jg6c" in "kube-system" namespace to be "Ready" ...
	I0924 00:25:19.318009  302471 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-g4mjm" in "kube-system" namespace to be "Ready" ...
	I0924 00:25:19.324177  302471 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-g4mjm" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-g4mjm" not found
	I0924 00:25:19.324208  302471 pod_ready.go:82] duration metric: took 6.19055ms for pod "coredns-7c65d6cfc9-g4mjm" in "kube-system" namespace to be "Ready" ...
	E0924 00:25:19.324221  302471 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-g4mjm" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-g4mjm" not found
	I0924 00:25:19.324230  302471 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-321431" in "kube-system" namespace to be "Ready" ...
	I0924 00:25:19.332290  302471 pod_ready.go:93] pod "etcd-addons-321431" in "kube-system" namespace has status "Ready":"True"
	I0924 00:25:19.332320  302471 pod_ready.go:82] duration metric: took 8.082171ms for pod "etcd-addons-321431" in "kube-system" namespace to be "Ready" ...
	I0924 00:25:19.332337  302471 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-321431" in "kube-system" namespace to be "Ready" ...
	I0924 00:25:19.339008  302471 pod_ready.go:93] pod "kube-apiserver-addons-321431" in "kube-system" namespace has status "Ready":"True"
	I0924 00:25:19.339033  302471 pod_ready.go:82] duration metric: took 6.687113ms for pod "kube-apiserver-addons-321431" in "kube-system" namespace to be "Ready" ...
	I0924 00:25:19.339045  302471 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-321431" in "kube-system" namespace to be "Ready" ...
	I0924 00:25:19.345392  302471 pod_ready.go:93] pod "kube-controller-manager-addons-321431" in "kube-system" namespace has status "Ready":"True"
	I0924 00:25:19.345417  302471 pod_ready.go:82] duration metric: took 6.363751ms for pod "kube-controller-manager-addons-321431" in "kube-system" namespace to be "Ready" ...
	I0924 00:25:19.345430  302471 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hzhsv" in "kube-system" namespace to be "Ready" ...
	I0924 00:25:19.414123  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:19.495964  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:19.514656  302471 pod_ready.go:93] pod "kube-proxy-hzhsv" in "kube-system" namespace has status "Ready":"True"
	I0924 00:25:19.514688  302471 pod_ready.go:82] duration metric: took 169.247839ms for pod "kube-proxy-hzhsv" in "kube-system" namespace to be "Ready" ...
	I0924 00:25:19.514718  302471 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-321431" in "kube-system" namespace to be "Ready" ...
	I0924 00:25:19.790832  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:19.916899  302471 pod_ready.go:93] pod "kube-scheduler-addons-321431" in "kube-system" namespace has status "Ready":"True"
	I0924 00:25:19.916925  302471 pod_ready.go:82] duration metric: took 402.193119ms for pod "kube-scheduler-addons-321431" in "kube-system" namespace to be "Ready" ...
	I0924 00:25:19.916935  302471 pod_ready.go:39] duration metric: took 16.12129071s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 00:25:19.916951  302471 api_server.go:52] waiting for apiserver process to appear ...
	I0924 00:25:19.917015  302471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 00:25:19.921180  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:19.935949  302471 api_server.go:72] duration metric: took 18.739757411s to wait for apiserver process to appear ...
	I0924 00:25:19.935987  302471 api_server.go:88] waiting for apiserver healthz status ...
	I0924 00:25:19.936010  302471 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0924 00:25:19.945731  302471 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0924 00:25:19.946976  302471 api_server.go:141] control plane version: v1.31.1
	I0924 00:25:19.947007  302471 api_server.go:131] duration metric: took 11.01106ms to wait for apiserver health ...
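The healthz wait is a plain HTTPS GET against the apiserver; a 200 with body `ok` (as echoed two lines above) counts as healthy. A self-contained sketch; TLS verification is skipped here only to keep it short, and a real client should verify against the cluster CA:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy GETs <endpoint>/healthz and reports whether the
// apiserver answered 200 "ok", mirroring the check logged above.
func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.49.2:8443")
	fmt.Println(ok, err)
}
```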
	I0924 00:25:19.947016  302471 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 00:25:19.997713  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:20.122403  302471 system_pods.go:59] 18 kube-system pods found
	I0924 00:25:20.122441  302471 system_pods.go:61] "coredns-7c65d6cfc9-5jg6c" [b0b5cd2d-3dd0-4838-96f7-8575ef052a11] Running
	I0924 00:25:20.122453  302471 system_pods.go:61] "csi-hostpath-attacher-0" [e393f71e-9b9a-419a-a444-347b82953167] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0924 00:25:20.122460  302471 system_pods.go:61] "csi-hostpath-resizer-0" [3b30ca1c-788c-4795-9fff-3d6faa0ba4ff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0924 00:25:20.122469  302471 system_pods.go:61] "csi-hostpathplugin-cnvzx" [07e442eb-1391-48a1-88e2-d047f22c84e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0924 00:25:20.122477  302471 system_pods.go:61] "etcd-addons-321431" [e14d67e8-05bf-4eaf-b0ce-2df67e7d70a1] Running
	I0924 00:25:20.122483  302471 system_pods.go:61] "kindnet-fszns" [8e2a7067-66bb-45f2-a570-c7e008140723] Running
	I0924 00:25:20.122487  302471 system_pods.go:61] "kube-apiserver-addons-321431" [82bf95f4-e064-4f98-8a60-1d8ce0eb74b0] Running
	I0924 00:25:20.122491  302471 system_pods.go:61] "kube-controller-manager-addons-321431" [cd9c0e35-f15c-44b7-b7f2-fde25dbc8446] Running
	I0924 00:25:20.122496  302471 system_pods.go:61] "kube-ingress-dns-minikube" [6f075537-af33-45d1-ae36-aca169bdd3a8] Running
	I0924 00:25:20.122500  302471 system_pods.go:61] "kube-proxy-hzhsv" [713066f0-3c9b-4dd4-b8ce-839d2bdffa78] Running
	I0924 00:25:20.122505  302471 system_pods.go:61] "kube-scheduler-addons-321431" [78913301-4da4-4431-a685-3d00a398b713] Running
	I0924 00:25:20.122511  302471 system_pods.go:61] "metrics-server-84c5f94fbc-mdzrd" [b72987f5-368f-4a5a-856e-a80311906c88] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 00:25:20.122519  302471 system_pods.go:61] "nvidia-device-plugin-daemonset-ngbxz" [8a3a3c47-bc68-4829-847f-7b602161033b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0924 00:25:20.122525  302471 system_pods.go:61] "registry-66c9cd494c-r8jb7" [01589e47-58af-41e9-8d33-bcfce48c058f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0924 00:25:20.122535  302471 system_pods.go:61] "registry-proxy-2xfkh" [d06ba723-4918-49cb-be93-a014b45358bd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0924 00:25:20.122541  302471 system_pods.go:61] "snapshot-controller-56fcc65765-sn2rw" [b845848a-08ac-4823-b8ff-1ca061714bb2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 00:25:20.122548  302471 system_pods.go:61] "snapshot-controller-56fcc65765-z2jzm" [2ebb7486-85c8-4ded-a695-ef80d3cef8e5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 00:25:20.122552  302471 system_pods.go:61] "storage-provisioner" [7c6ca4ca-02e4-4a0d-8a6a-a4e95eb6cc67] Running
	I0924 00:25:20.122558  302471 system_pods.go:74] duration metric: took 175.536497ms to wait for pod list to return data ...
	I0924 00:25:20.122603  302471 default_sa.go:34] waiting for default service account to be created ...
	I0924 00:25:20.291821  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:20.316080  302471 default_sa.go:45] found service account: "default"
	I0924 00:25:20.316112  302471 default_sa.go:55] duration metric: took 193.497864ms for default service account to be created ...
	I0924 00:25:20.316123  302471 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 00:25:20.416951  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:20.517515  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:20.526247  302471 system_pods.go:86] 18 kube-system pods found
	I0924 00:25:20.526286  302471 system_pods.go:89] "coredns-7c65d6cfc9-5jg6c" [b0b5cd2d-3dd0-4838-96f7-8575ef052a11] Running
	I0924 00:25:20.526300  302471 system_pods.go:89] "csi-hostpath-attacher-0" [e393f71e-9b9a-419a-a444-347b82953167] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0924 00:25:20.526309  302471 system_pods.go:89] "csi-hostpath-resizer-0" [3b30ca1c-788c-4795-9fff-3d6faa0ba4ff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0924 00:25:20.526317  302471 system_pods.go:89] "csi-hostpathplugin-cnvzx" [07e442eb-1391-48a1-88e2-d047f22c84e6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0924 00:25:20.526322  302471 system_pods.go:89] "etcd-addons-321431" [e14d67e8-05bf-4eaf-b0ce-2df67e7d70a1] Running
	I0924 00:25:20.526326  302471 system_pods.go:89] "kindnet-fszns" [8e2a7067-66bb-45f2-a570-c7e008140723] Running
	I0924 00:25:20.526333  302471 system_pods.go:89] "kube-apiserver-addons-321431" [82bf95f4-e064-4f98-8a60-1d8ce0eb74b0] Running
	I0924 00:25:20.526341  302471 system_pods.go:89] "kube-controller-manager-addons-321431" [cd9c0e35-f15c-44b7-b7f2-fde25dbc8446] Running
	I0924 00:25:20.526346  302471 system_pods.go:89] "kube-ingress-dns-minikube" [6f075537-af33-45d1-ae36-aca169bdd3a8] Running
	I0924 00:25:20.526356  302471 system_pods.go:89] "kube-proxy-hzhsv" [713066f0-3c9b-4dd4-b8ce-839d2bdffa78] Running
	I0924 00:25:20.526360  302471 system_pods.go:89] "kube-scheduler-addons-321431" [78913301-4da4-4431-a685-3d00a398b713] Running
	I0924 00:25:20.526366  302471 system_pods.go:89] "metrics-server-84c5f94fbc-mdzrd" [b72987f5-368f-4a5a-856e-a80311906c88] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 00:25:20.526376  302471 system_pods.go:89] "nvidia-device-plugin-daemonset-ngbxz" [8a3a3c47-bc68-4829-847f-7b602161033b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0924 00:25:20.526383  302471 system_pods.go:89] "registry-66c9cd494c-r8jb7" [01589e47-58af-41e9-8d33-bcfce48c058f] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0924 00:25:20.526394  302471 system_pods.go:89] "registry-proxy-2xfkh" [d06ba723-4918-49cb-be93-a014b45358bd] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0924 00:25:20.526401  302471 system_pods.go:89] "snapshot-controller-56fcc65765-sn2rw" [b845848a-08ac-4823-b8ff-1ca061714bb2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 00:25:20.526410  302471 system_pods.go:89] "snapshot-controller-56fcc65765-z2jzm" [2ebb7486-85c8-4ded-a695-ef80d3cef8e5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0924 00:25:20.526415  302471 system_pods.go:89] "storage-provisioner" [7c6ca4ca-02e4-4a0d-8a6a-a4e95eb6cc67] Running
	I0924 00:25:20.526422  302471 system_pods.go:126] duration metric: took 210.290099ms to wait for k8s-apps to be running ...
	I0924 00:25:20.526435  302471 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 00:25:20.526491  302471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:25:20.541779  302471 system_svc.go:56] duration metric: took 15.3349ms WaitForService to wait for kubelet
	I0924 00:25:20.541813  302471 kubeadm.go:582] duration metric: took 19.345625435s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 00:25:20.541833  302471 node_conditions.go:102] verifying NodePressure condition ...
	I0924 00:25:20.714684  302471 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0924 00:25:20.714713  302471 node_conditions.go:123] node cpu capacity is 2
	I0924 00:25:20.714725  302471 node_conditions.go:105] duration metric: took 172.886252ms to run NodePressure ...
	I0924 00:25:20.714737  302471 start.go:241] waiting for startup goroutines ...
	I0924 00:25:20.789458  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:20.914040  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:20.994859  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:21.289655  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:21.418018  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:21.495589  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:21.789995  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:21.914215  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:21.995032  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:22.290793  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:22.413517  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:22.498055  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:22.789804  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:22.914467  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:22.995712  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:23.289636  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:23.414290  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:23.495260  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:23.790564  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:23.913984  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:23.994539  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:24.290243  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:24.413615  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:24.494418  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:24.790427  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:24.914612  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:24.995920  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:25.291867  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:25.416606  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:25.495114  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:25.790723  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:25.914055  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:25.995013  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:26.290845  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:26.414366  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:26.495619  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:26.791329  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:26.916120  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:26.995860  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:27.290883  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:27.414878  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:27.495394  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:27.790029  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:27.914880  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:27.995786  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:28.289882  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:28.413852  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:28.494517  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:28.789859  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:28.913954  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:28.994821  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0924 00:25:29.289927  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:29.414423  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:29.495210  302471 kapi.go:107] duration metric: took 17.504512028s to wait for kubernetes.io/minikube-addons=registry ...
	I0924 00:25:29.789584  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:29.918883  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:30.290015  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:30.416984  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:30.790270  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:30.913764  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:31.293314  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:31.422868  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:31.801699  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:31.916879  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:32.289519  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:32.416360  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:32.790534  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:32.913540  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:33.289844  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:33.413948  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:33.790111  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:33.916584  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:34.289990  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:34.414120  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:34.790545  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:34.926251  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:35.290182  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:35.414034  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:35.819702  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:35.914306  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:36.291336  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:36.413885  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:36.821375  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:36.913238  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:37.296450  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:37.414847  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:37.820225  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:37.918749  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:38.289632  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:38.414127  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:38.789392  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:38.913657  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:39.289514  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:39.413997  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:39.820689  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:39.919608  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:40.325721  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:40.414444  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:40.809633  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:40.914670  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:41.294172  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:41.427236  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:41.791337  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:41.915248  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:42.293458  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:42.415927  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:42.823527  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:42.921392  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:43.292007  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:43.414653  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:43.790151  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:43.914346  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:44.294490  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:44.414506  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:44.823613  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:44.920721  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:45.323946  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:45.421865  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:45.789747  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:45.913908  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:46.326535  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:46.421962  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:46.790292  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:46.913662  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:47.291026  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:47.414414  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:47.790700  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:47.929008  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:48.319870  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:48.416062  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:48.790241  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:48.914188  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:49.290360  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:49.427423  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:49.790540  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:49.914013  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:50.290606  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:50.413874  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:50.819879  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:50.914291  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:51.292824  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:51.418562  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:51.790650  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:51.914051  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:52.290973  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:52.481566  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:52.790344  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:52.914091  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:53.290568  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:53.413750  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:53.791140  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:53.914082  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:54.290633  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:54.414060  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:54.792229  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:54.914646  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:55.290634  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:55.420026  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:55.820393  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:55.914733  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:56.321552  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:56.414344  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:56.789690  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:56.914261  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:57.289953  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:57.413565  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:57.790431  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:57.913988  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:58.324967  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:58.414300  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:58.789865  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:58.914179  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:59.302154  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:59.413386  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:25:59.820219  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:25:59.919949  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:00.332259  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:26:00.418060  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:00.810501  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:26:00.914672  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:01.290684  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:26:01.414966  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:01.790489  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:26:01.913497  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:02.289895  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:26:02.415553  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:02.789477  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:26:02.921944  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:03.320563  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0924 00:26:03.414277  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:03.790459  302471 kapi.go:107] duration metric: took 51.005569965s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0924 00:26:03.914773  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:04.414070  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:04.914327  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:05.414332  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:05.913762  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:06.414131  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:06.914434  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:07.413776  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:07.914065  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:08.414534  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:08.913903  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:09.414067  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:09.914985  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:10.414101  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:10.914824  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:11.414293  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:11.913673  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:12.416517  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:12.914485  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:13.413626  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:13.914646  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:14.417797  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:14.915170  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:15.413829  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:15.913633  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:16.416563  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:16.914136  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:17.414690  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:17.914197  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:18.414433  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:18.914161  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:19.414529  302471 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0924 00:26:19.914457  302471 kapi.go:107] duration metric: took 1m9.505244744s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0924 00:26:37.217742  302471 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0924 00:26:37.217768  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:37.718162  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:38.218315  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:38.718226  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:39.218499  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:39.717615  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:40.217888  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:40.717492  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:41.217778  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:41.717390  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:42.219859  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:42.718172  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:43.217750  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:43.719121  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:44.217256  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:44.718300  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:45.219389  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:45.718323  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:46.217838  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:46.718192  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:47.217625  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:47.717267  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:48.217784  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:48.717628  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:49.217477  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:49.718367  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:50.218236  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:50.718497  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:51.217492  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:51.717888  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:52.217848  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:52.718491  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:53.218030  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:53.718286  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:54.217379  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:54.718778  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:55.218496  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:55.717094  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:56.217684  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:56.717644  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:57.217316  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:57.718010  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:58.218014  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:58.717397  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:59.217617  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:26:59.717759  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:00.236865  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:00.717946  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:01.217873  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:01.717164  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:02.217823  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:02.718055  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:03.218135  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:03.717078  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:04.218120  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:04.718191  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:05.218455  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:05.717153  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:06.218871  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:06.718190  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:07.218180  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:07.717730  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:08.217906  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:08.718443  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:09.217458  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:09.717371  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:10.217478  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:10.717436  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:11.216970  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:11.717108  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:12.218135  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:12.717813  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:13.217825  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:13.718622  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:14.218886  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:14.718208  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:15.218186  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:15.719552  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:16.217625  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:16.719658  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:17.234418  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:17.719338  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:18.218712  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:18.717766  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:19.217661  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:19.717989  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:20.217577  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:20.717241  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:21.218283  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:21.718047  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:22.217951  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:22.717413  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:23.217319  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:23.717985  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:24.218304  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:24.719525  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:25.217707  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:25.717055  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:26.218061  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:26.717206  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:27.218409  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:27.718109  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:28.217659  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:28.718083  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:29.217503  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:29.717282  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:30.217798  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:30.717347  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:31.218243  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:31.717850  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:32.218157  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:32.717902  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:33.218371  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:33.716871  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:34.218373  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:34.717350  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:35.217548  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:35.717579  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:36.217759  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:36.717229  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:37.218886  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:37.717104  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:38.218096  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:38.719060  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:39.218051  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:39.717872  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:40.217940  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:40.717682  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:41.218579  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:41.718328  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:42.218227  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:42.717980  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:43.217859  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:43.717844  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:44.217267  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:44.718630  302471 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0924 00:27:45.218445  302471 kapi.go:107] duration metric: took 2m31.00459058s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0924 00:27:45.220577  302471 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-321431 cluster.
	I0924 00:27:45.223588  302471 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0924 00:27:45.226468  302471 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0924 00:27:45.228629  302471 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, default-storageclass, ingress-dns, storage-provisioner-rancher, storage-provisioner, volcano, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0924 00:27:45.230523  302471 addons.go:510] duration metric: took 2m44.034028201s for enable addons: enabled=[cloud-spanner nvidia-device-plugin default-storageclass ingress-dns storage-provisioner-rancher storage-provisioner volcano metrics-server inspektor-gadget yakd volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0924 00:27:45.230596  302471 start.go:246] waiting for cluster config update ...
	I0924 00:27:45.230629  302471 start.go:255] writing updated cluster config ...
	I0924 00:27:45.231074  302471 ssh_runner.go:195] Run: rm -f paused
	I0924 00:27:45.623255  302471 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 00:27:45.625774  302471 out.go:177] * Done! kubectl is now configured to use "addons-321431" cluster and "default" namespace by default
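
A note on the gcp-auth messages above: the only detail the log guarantees is the label key, gcp-auth-skip-secret. As a minimal sketch of the opt-out (the pod name, image, and the "true" value are illustrative assumptions, not taken from this run):

kubectl --context addons-321431 apply -f - <<'EOF'
# Hypothetical pod; only the gcp-auth-skip-secret label key comes from the log above.
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds
  labels:
    gcp-auth-skip-secret: "true"
spec:
  containers:
  - name: app
    image: nginx
EOF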
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	5870c979770c7       4f725bf50aaa5       2 minutes ago       Exited              gadget                                   5                   fc62d03472813       gadget-f92dm
	2c6a3084c0365       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   fc5040268c9d4       gcp-auth-89d5ffd79-8mdbz
	6d3e6f440ddf2       8b46b1cd48760       4 minutes ago       Running             admission                                0                   63b0ce684ebd9       volcano-admission-77d7d48b68-6vsl6
	8fdc54f1ef9bb       289a818c8d9c5       4 minutes ago       Running             controller                               0                   8112274191d8d       ingress-nginx-controller-bc57996ff-mxfl9
	381e0821e3046       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   129bff444fbf7       csi-hostpathplugin-cnvzx
	0a5fc6cd8ce1a       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   129bff444fbf7       csi-hostpathplugin-cnvzx
	42edb74b590d3       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   129bff444fbf7       csi-hostpathplugin-cnvzx
	c8bdb10233076       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   129bff444fbf7       csi-hostpathplugin-cnvzx
	44138feff075b       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   129bff444fbf7       csi-hostpathplugin-cnvzx
	2f28998848c48       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   525a86912ea38       csi-hostpath-attacher-0
	a553701fde932       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   129bff444fbf7       csi-hostpathplugin-cnvzx
	ad13d5af354ff       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   758c9ef57f239       volcano-controllers-56675bb4d5-stspk
	6ec7e2843f665       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   0ea20f4606fc4       volcano-scheduler-576bc46687-5889h
	af22425590bcc       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   2402246d06aec       snapshot-controller-56fcc65765-sn2rw
	77adce99ad554       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   e005cdcbe36eb       csi-hostpath-resizer-0
	af9817f1a7201       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   93449df04d5c6       snapshot-controller-56fcc65765-z2jzm
	fb71002937702       420193b27261a       5 minutes ago       Exited              patch                                    1                   eccb14e6d97e4       ingress-nginx-admission-patch-klq4g
	a2fc1d9b7732a       420193b27261a       5 minutes ago       Exited              create                                   0                   0fdba9a0d94fb       ingress-nginx-admission-create-z5tsq
	71462049d4b49       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   bc9ae6d80e5e7       metrics-server-84c5f94fbc-mdzrd
	d6fe840f880d9       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   2e7e313f2381a       local-path-provisioner-86d989889c-trcsg
	52c32f886d9f2       77bdba588b953       5 minutes ago       Running             yakd                                     0                   05329bce6bb2d       yakd-dashboard-67d98fc6b-cmn8w
	a52ba9705aabc       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   f2b8c094eb47e       nvidia-device-plugin-daemonset-ngbxz
	83994479174cc       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   9c8bc6f295403       registry-proxy-2xfkh
	dd4ff5cea1794       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   24328244969dd       registry-66c9cd494c-r8jb7
	c3902a00f84ae       be9cac3585579       5 minutes ago       Running             cloud-spanner-emulator                   0                   15296704e3174       cloud-spanner-emulator-5b584cc74-ff59c
	d669df99d448b       2f6c962e7b831       5 minutes ago       Running             coredns                                  0                   04236d105607c       coredns-7c65d6cfc9-5jg6c
	bd5f28c3ab19d       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   be9bbc41de6b5       kube-ingress-dns-minikube
	13ea99485e6fc       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   121f94b11b33f       storage-provisioner
	216f30c3d51b6       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   bc43cb92f6d0b       kindnet-fszns
	002dafc952f88       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   5ddffece6fab1       kube-proxy-hzhsv
	3dc76c6f4a25c       7f8aa378bb47d       6 minutes ago       Running             kube-scheduler                           0                   7f5415fc8e8e2       kube-scheduler-addons-321431
	4215b7b0d8a7a       d3f53a98c0a9d       6 minutes ago       Running             kube-apiserver                           0                   ab483430364fe       kube-apiserver-addons-321431
	949c19abd2fa7       27e3830e14027       6 minutes ago       Running             etcd                                     0                   b3cbb54bb51ad       etcd-addons-321431
	3895c92019ca4       279f381cb3736       6 minutes ago       Running             kube-controller-manager                  0                   229bd3843a8f7       kube-controller-manager-addons-321431
	
	
	==> containerd <==
	Sep 24 00:27:56 addons-321431 containerd[812]: time="2024-09-24T00:27:56.921231377Z" level=info msg="RemovePodSandbox \"2ad9eea0bc59a80f9d68d43fdada7a0fc4e6ccc2fa022946d84e8329777f69d9\" returns successfully"
	Sep 24 00:28:49 addons-321431 containerd[812]: time="2024-09-24T00:28:49.825499573Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\""
	Sep 24 00:28:49 addons-321431 containerd[812]: time="2024-09-24T00:28:49.951403889Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 24 00:28:49 addons-321431 containerd[812]: time="2024-09-24T00:28:49.953248602Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: active requests=0, bytes read=89"
	Sep 24 00:28:49 addons-321431 containerd[812]: time="2024-09-24T00:28:49.956831871Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" with image id \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\", size \"72524105\" in 131.273312ms"
	Sep 24 00:28:49 addons-321431 containerd[812]: time="2024-09-24T00:28:49.956893352Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" returns image reference \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\""
	Sep 24 00:28:49 addons-321431 containerd[812]: time="2024-09-24T00:28:49.959107859Z" level=info msg="CreateContainer within sandbox \"fc62d0347281343e866d90c68c4a9a42cfdc521a4a414fef96203c80f2f09aed\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Sep 24 00:28:49 addons-321431 containerd[812]: time="2024-09-24T00:28:49.983089582Z" level=info msg="CreateContainer within sandbox \"fc62d0347281343e866d90c68c4a9a42cfdc521a4a414fef96203c80f2f09aed\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"5870c979770c7beaba20e7e12fd249c8cb2f270ef808f79b2f65bf9397ed5865\""
	Sep 24 00:28:49 addons-321431 containerd[812]: time="2024-09-24T00:28:49.983875136Z" level=info msg="StartContainer for \"5870c979770c7beaba20e7e12fd249c8cb2f270ef808f79b2f65bf9397ed5865\""
	Sep 24 00:28:50 addons-321431 containerd[812]: time="2024-09-24T00:28:50.064569717Z" level=info msg="StartContainer for \"5870c979770c7beaba20e7e12fd249c8cb2f270ef808f79b2f65bf9397ed5865\" returns successfully"
	Sep 24 00:28:51 addons-321431 containerd[812]: time="2024-09-24T00:28:51.743738460Z" level=info msg="shim disconnected" id=5870c979770c7beaba20e7e12fd249c8cb2f270ef808f79b2f65bf9397ed5865 namespace=k8s.io
	Sep 24 00:28:51 addons-321431 containerd[812]: time="2024-09-24T00:28:51.743815670Z" level=warning msg="cleaning up after shim disconnected" id=5870c979770c7beaba20e7e12fd249c8cb2f270ef808f79b2f65bf9397ed5865 namespace=k8s.io
	Sep 24 00:28:51 addons-321431 containerd[812]: time="2024-09-24T00:28:51.743827961Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 24 00:28:52 addons-321431 containerd[812]: time="2024-09-24T00:28:52.041600545Z" level=info msg="RemoveContainer for \"536a6ac1bcd1dc83021c9ecb26f8955ebef6a51ddf88f26041e32698e97a5ae1\""
	Sep 24 00:28:52 addons-321431 containerd[812]: time="2024-09-24T00:28:52.049067890Z" level=info msg="RemoveContainer for \"536a6ac1bcd1dc83021c9ecb26f8955ebef6a51ddf88f26041e32698e97a5ae1\" returns successfully"
	Sep 24 00:28:56 addons-321431 containerd[812]: time="2024-09-24T00:28:56.926846305Z" level=info msg="RemoveContainer for \"95d47feb5f18c2cf8badfaed6dd0249ed577f82ba6f9887c876a75fe273812c4\""
	Sep 24 00:28:56 addons-321431 containerd[812]: time="2024-09-24T00:28:56.933693428Z" level=info msg="RemoveContainer for \"95d47feb5f18c2cf8badfaed6dd0249ed577f82ba6f9887c876a75fe273812c4\" returns successfully"
	Sep 24 00:28:56 addons-321431 containerd[812]: time="2024-09-24T00:28:56.935911972Z" level=info msg="StopPodSandbox for \"01fc1ddd6e393e10d083b53833d8e2bfa103fd1ad5a866752fef2ec673eb15b9\""
	Sep 24 00:28:56 addons-321431 containerd[812]: time="2024-09-24T00:28:56.943637555Z" level=info msg="TearDown network for sandbox \"01fc1ddd6e393e10d083b53833d8e2bfa103fd1ad5a866752fef2ec673eb15b9\" successfully"
	Sep 24 00:28:56 addons-321431 containerd[812]: time="2024-09-24T00:28:56.943679598Z" level=info msg="StopPodSandbox for \"01fc1ddd6e393e10d083b53833d8e2bfa103fd1ad5a866752fef2ec673eb15b9\" returns successfully"
	Sep 24 00:28:56 addons-321431 containerd[812]: time="2024-09-24T00:28:56.944205994Z" level=info msg="RemovePodSandbox for \"01fc1ddd6e393e10d083b53833d8e2bfa103fd1ad5a866752fef2ec673eb15b9\""
	Sep 24 00:28:56 addons-321431 containerd[812]: time="2024-09-24T00:28:56.944255783Z" level=info msg="Forcibly stopping sandbox \"01fc1ddd6e393e10d083b53833d8e2bfa103fd1ad5a866752fef2ec673eb15b9\""
	Sep 24 00:28:56 addons-321431 containerd[812]: time="2024-09-24T00:28:56.954399515Z" level=info msg="TearDown network for sandbox \"01fc1ddd6e393e10d083b53833d8e2bfa103fd1ad5a866752fef2ec673eb15b9\" successfully"
	Sep 24 00:28:56 addons-321431 containerd[812]: time="2024-09-24T00:28:56.960954472Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01fc1ddd6e393e10d083b53833d8e2bfa103fd1ad5a866752fef2ec673eb15b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 24 00:28:56 addons-321431 containerd[812]: time="2024-09-24T00:28:56.961067407Z" level=info msg="RemovePodSandbox \"01fc1ddd6e393e10d083b53833d8e2bfa103fd1ad5a866752fef2ec673eb15b9\" returns successfully"
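
The containerd entries above trace one full cycle for the gadget container: the image is pulled, the container is created and started, and within two seconds its shim disconnects, meaning the container process exited and the runtime cleaned up after it. A hedged way to inspect the loop from the node (assuming crictl is available in the minikube node image, as it normally is):

minikube ssh -p addons-321431 -- sudo crictl ps -a --name gadget
# then inspect the exited container by the ID shown in the first column:
minikube ssh -p addons-321431 -- sudo crictl inspect 5870c979770c7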
	
	
	==> coredns [d669df99d448b04c2186b4099fe3a9d79cd04bf03fa09353378db67d2ba9186f] <==
	[INFO] 10.244.0.5:44998 - 53456 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000148068s
	[INFO] 10.244.0.5:49996 - 22962 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002640153s
	[INFO] 10.244.0.5:49996 - 58800 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.004034957s
	[INFO] 10.244.0.5:60796 - 58217 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000120582s
	[INFO] 10.244.0.5:60796 - 17772 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000187453s
	[INFO] 10.244.0.5:34513 - 65523 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000095761s
	[INFO] 10.244.0.5:34513 - 52476 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000113443s
	[INFO] 10.244.0.5:43605 - 28867 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000048311s
	[INFO] 10.244.0.5:43605 - 52161 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000116208s
	[INFO] 10.244.0.5:53434 - 28955 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000047318s
	[INFO] 10.244.0.5:53434 - 2821 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000035233s
	[INFO] 10.244.0.5:41836 - 63276 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001899867s
	[INFO] 10.244.0.5:41836 - 3886 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001485584s
	[INFO] 10.244.0.5:33552 - 54865 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000067084s
	[INFO] 10.244.0.5:33552 - 3414 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000077603s
	[INFO] 10.244.0.24:56264 - 28411 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000173209s
	[INFO] 10.244.0.24:57733 - 34631 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000120524s
	[INFO] 10.244.0.24:38335 - 36846 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000119376s
	[INFO] 10.244.0.24:36922 - 29265 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124758s
	[INFO] 10.244.0.24:49852 - 62079 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095581s
	[INFO] 10.244.0.24:56747 - 40 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000088s
	[INFO] 10.244.0.24:60081 - 55315 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002133023s
	[INFO] 10.244.0.24:44242 - 8326 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002752284s
	[INFO] 10.244.0.24:55802 - 26302 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001759832s
	[INFO] 10.244.0.24:53366 - 17751 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002133933s
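
The NXDOMAIN/NOERROR pattern above is ordinary resolver search-path expansion, not an error: with the cluster default of ndots:5, a lookup is first tried with each search suffix appended (.kube-system.svc.cluster.local, .svc.cluster.local, .cluster.local, .us-east-2.compute.internal), each returning NXDOMAIN, until the fully qualified name resolves with NOERROR. The search list can be confirmed from inside any pod that has a shell (placeholder pod name; the resolv.conf shape below is inferred from the queries logged above, not captured from this run):

kubectl --context addons-321431 -n kube-system exec <pod-with-a-shell> -- cat /etc/resolv.conf
# expected shape:
#   search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
#   options ndots:5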
	
	
	==> describe nodes <==
	Name:               addons-321431
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-321431
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=addons-321431
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T00_24_57_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-321431
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-321431"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 00:24:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-321431
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 00:30:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 00:28:00 +0000   Tue, 24 Sep 2024 00:24:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 00:28:00 +0000   Tue, 24 Sep 2024 00:24:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 00:28:00 +0000   Tue, 24 Sep 2024 00:24:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 00:28:00 +0000   Tue, 24 Sep 2024 00:24:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-321431
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 8d246b8d50854ac99b0f250ab80a3362
	  System UUID:                4fd55668-275b-44f5-aedb-99dba7098f6e
	  Boot ID:                    e579fd69-d9d0-4441-8d26-00b8ee3b7574
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-5b584cc74-ff59c      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  gadget                      gadget-f92dm                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  gcp-auth                    gcp-auth-89d5ffd79-8mdbz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-mxfl9    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m54s
	  kube-system                 coredns-7c65d6cfc9-5jg6c                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m2s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpathplugin-cnvzx                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 etcd-addons-321431                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m7s
	  kube-system                 kindnet-fszns                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m3s
	  kube-system                 kube-apiserver-addons-321431                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-controller-manager-addons-321431       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-proxy-hzhsv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-addons-321431                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m7s
	  kube-system                 metrics-server-84c5f94fbc-mdzrd             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m57s
	  kube-system                 nvidia-device-plugin-daemonset-ngbxz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 registry-66c9cd494c-r8jb7                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 registry-proxy-2xfkh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 snapshot-controller-56fcc65765-sn2rw        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 snapshot-controller-56fcc65765-z2jzm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  local-path-storage          local-path-provisioner-86d989889c-trcsg     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  volcano-system              volcano-admission-77d7d48b68-6vsl6          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-controllers-56675bb4d5-stspk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-scheduler-576bc46687-5889h          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-cmn8w              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m1s                   kube-proxy       
	  Normal   NodeAllocatableEnforced  6m15s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 6m15s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m15s (x8 over 6m15s)  kubelet          Node addons-321431 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m15s (x7 over 6m15s)  kubelet          Node addons-321431 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m15s (x7 over 6m15s)  kubelet          Node addons-321431 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m15s                  kubelet          Starting kubelet.
	  Normal   Starting                 6m8s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m8s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m7s                   kubelet          Node addons-321431 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m7s                   kubelet          Node addons-321431 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m7s                   kubelet          Node addons-321431 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m3s                   node-controller  Node addons-321431 event: Registered Node addons-321431 in Controller
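
As a cross-check, the Allocated cpu figure is simply the column sum of the pod table above: 100m (ingress-nginx-controller) + 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (metrics-server) = 1050m, and 1050m of the 2000m allocatable on this 2-CPU node is 52.5%, printed as 52%. The same summary can be pulled on demand:

kubectl --context addons-321431 describe node addons-321431 | grep -A 12 'Allocated resources'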
	
	
	==> dmesg <==
	[Sep23 22:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015091] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.400985] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.739806] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.757906] kauditd_printk_skb: 36 callbacks suppressed
	[Sep23 23:56] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [949c19abd2fa7e119fa3757d0a26074843ecd6b46cdcf672d61f21fb1c3c7131] <==
	{"level":"info","ts":"2024-09-24T00:24:50.669105Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-24T00:24:50.669261Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-24T00:24:50.670946Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-24T00:24:50.672103Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-24T00:24:50.672175Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-24T00:24:51.062976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-24T00:24:51.063197Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-24T00:24:51.063315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-24T00:24:51.063473Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-24T00:24:51.063562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-24T00:24:51.063644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-24T00:24:51.063733Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-24T00:24:51.067172Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-321431 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-24T00:24:51.067290Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T00:24:51.067691Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-24T00:24:51.068594Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T00:24:51.070081Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-24T00:24:51.071938Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-24T00:24:51.073968Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T00:24:51.074455Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-24T00:24:51.080565Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-24T00:24:51.080754Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-24T00:24:51.080895Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T00:24:51.081243Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-24T00:24:51.081413Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> gcp-auth [2c6a3084c0365f4f6a2ed58c908a961da052a2ef0b38e489246ffd71191bbe02] <==
	2024/09/24 00:27:44 GCP Auth Webhook started!
	2024/09/24 00:28:02 Ready to marshal response ...
	2024/09/24 00:28:02 Ready to write response ...
	2024/09/24 00:28:03 Ready to marshal response ...
	2024/09/24 00:28:03 Ready to write response ...
	
	
	==> kernel <==
	 00:31:04 up  2:13,  0 users,  load average: 0.11, 1.38, 2.27
	Linux addons-321431 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [216f30c3d51b60d310300879ffb1c4a88936c068086b34c28fd99d6524695dab] <==
	I0924 00:29:03.604272       1 main.go:299] handling current node
	I0924 00:29:13.609679       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 00:29:13.609715       1 main.go:299] handling current node
	I0924 00:29:23.603816       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 00:29:23.604023       1 main.go:299] handling current node
	I0924 00:29:33.603645       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 00:29:33.603692       1 main.go:299] handling current node
	I0924 00:29:43.611593       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 00:29:43.611632       1 main.go:299] handling current node
	I0924 00:29:53.612701       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 00:29:53.612745       1 main.go:299] handling current node
	I0924 00:30:03.604490       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 00:30:03.604530       1 main.go:299] handling current node
	I0924 00:30:13.610989       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 00:30:13.611033       1 main.go:299] handling current node
	I0924 00:30:23.611043       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 00:30:23.611140       1 main.go:299] handling current node
	I0924 00:30:33.604241       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 00:30:33.604281       1 main.go:299] handling current node
	I0924 00:30:43.611051       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 00:30:43.611091       1 main.go:299] handling current node
	I0924 00:30:53.611495       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 00:30:53.611701       1 main.go:299] handling current node
	I0924 00:31:03.603738       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0924 00:31:03.603772       1 main.go:299] handling current node
	
	
	==> kube-apiserver [4215b7b0d8a7a9df96ee5abcc5fce2893f917876ed7c54ecc4327ede030bdcce] <==
	W0924 00:26:15.254523       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.187.125:443: connect: connection refused
	W0924 00:26:16.315251       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.187.125:443: connect: connection refused
	W0924 00:26:17.116822       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.119.228:443: connect: connection refused
	E0924 00:26:17.116865       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.119.228:443: connect: connection refused" logger="UnhandledError"
	W0924 00:26:17.118693       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.108.187.125:443: connect: connection refused
	W0924 00:26:17.185209       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.119.228:443: connect: connection refused
	E0924 00:26:17.185250       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.119.228:443: connect: connection refused" logger="UnhandledError"
	W0924 00:26:17.187074       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.108.187.125:443: connect: connection refused
	W0924 00:26:17.381947       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.187.125:443: connect: connection refused
	W0924 00:26:18.426759       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.187.125:443: connect: connection refused
	W0924 00:26:19.439712       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.187.125:443: connect: connection refused
	W0924 00:26:20.514938       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.187.125:443: connect: connection refused
	W0924 00:26:21.584869       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.187.125:443: connect: connection refused
	W0924 00:26:22.660475       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.187.125:443: connect: connection refused
	W0924 00:26:23.691144       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.187.125:443: connect: connection refused
	W0924 00:26:24.773071       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.187.125:443: connect: connection refused
	W0924 00:26:25.852948       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.108.187.125:443: connect: connection refused
	W0924 00:26:37.080491       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.119.228:443: connect: connection refused
	E0924 00:26:37.080537       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.119.228:443: connect: connection refused" logger="UnhandledError"
	W0924 00:27:17.134188       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.119.228:443: connect: connection refused
	E0924 00:27:17.134232       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.119.228:443: connect: connection refused" logger="UnhandledError"
	W0924 00:27:17.196704       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.119.228:443: connect: connection refused
	E0924 00:27:17.196759       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.119.228:443: connect: connection refused" logger="UnhandledError"
	I0924 00:28:02.210675       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0924 00:28:02.271344       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
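
Two different failure behaviours are visible above while the webhook backends were still coming up: the gcp-auth webhook logs "failing open" (the request is admitted anyway, with only an Unhandled Error note), whereas the volcano webhooks log "failing closed" (the request is rejected). That distinction is each webhook's failurePolicy (Ignore vs Fail), and it can be read straight off the configurations:

kubectl --context addons-321431 get mutatingwebhookconfigurations \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.webhooks[*].failurePolicy}{"\n"}{end}'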
	
	
	==> kube-controller-manager [3895c92019ca4c3f01b3ee5def008663433caf0e76d6755f6a3306bdda48d52d] <==
	I0924 00:27:17.165448       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0924 00:27:17.168260       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0924 00:27:17.181120       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0924 00:27:17.212625       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0924 00:27:17.235880       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0924 00:27:17.236243       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0924 00:27:17.244320       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0924 00:27:18.753025       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0924 00:27:18.765260       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0924 00:27:19.900641       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0924 00:27:19.924816       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0924 00:27:20.908714       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0924 00:27:20.917300       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0924 00:27:20.923125       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0924 00:27:20.933806       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0924 00:27:20.942724       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0924 00:27:20.948618       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0924 00:27:44.850446       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="11.017259ms"
	I0924 00:27:44.850771       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="72.065µs"
	I0924 00:27:50.049006       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0924 00:27:50.049590       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0924 00:27:50.127869       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0924 00:27:50.147401       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0924 00:28:00.934519       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-321431"
	I0924 00:28:01.893190       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [002dafc952f88d2ab3367c7dee7618cd113108b30302aa14a7d2183691f7787e] <==
	I0924 00:25:02.883818       1 server_linux.go:66] "Using iptables proxy"
	I0924 00:25:02.967496       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0924 00:25:02.967563       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0924 00:25:03.032786       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0924 00:25:03.032853       1 server_linux.go:169] "Using iptables Proxier"
	I0924 00:25:03.034887       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0924 00:25:03.035388       1 server.go:483] "Version info" version="v1.31.1"
	I0924 00:25:03.035406       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0924 00:25:03.048879       1 config.go:199] "Starting service config controller"
	I0924 00:25:03.048929       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0924 00:25:03.048988       1 config.go:105] "Starting endpoint slice config controller"
	I0924 00:25:03.048993       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0924 00:25:03.056586       1 config.go:328] "Starting node config controller"
	I0924 00:25:03.056612       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0924 00:25:03.149113       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0924 00:25:03.149208       1 shared_informer.go:320] Caches are synced for service config
	I0924 00:25:03.157193       1 shared_informer.go:320] Caches are synced for node config
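
The startup lines above confirm the iptables proxier on a primarily-IPv4 dual-stack node. Should the active mode ever need verifying, kube-proxy reports it over its metrics port; this sketch assumes the default metrics bind of 127.0.0.1:10249 is in effect:

minikube ssh -p addons-321431 -- curl -s http://localhost:10249/proxyMode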
	
	
	==> kube-scheduler [3dc76c6f4a25c3ae96dd20caa0bb9604f048648d7157d32c7870f61487e7f7b5] <==
	W0924 00:24:54.805540       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0924 00:24:54.805563       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 00:24:54.805712       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 00:24:54.805734       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0924 00:24:54.806960       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 00:24:54.807007       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 00:24:54.807908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0924 00:24:54.807949       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 00:24:54.810742       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0924 00:24:54.810791       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0924 00:24:54.810878       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0924 00:24:54.810898       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 00:24:54.811013       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0924 00:24:54.811033       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 00:24:54.811106       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0924 00:24:54.811123       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 00:24:54.811180       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0924 00:24:54.811196       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0924 00:24:54.811716       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0924 00:24:54.811758       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0924 00:24:54.811834       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 00:24:54.811854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0924 00:24:54.812397       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0924 00:24:54.812429       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0924 00:24:56.102389       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 24 00:29:05 addons-321431 kubelet[1463]: I0924 00:29:05.823964    1463 scope.go:117] "RemoveContainer" containerID="5870c979770c7beaba20e7e12fd249c8cb2f270ef808f79b2f65bf9397ed5865"
	Sep 24 00:29:05 addons-321431 kubelet[1463]: E0924 00:29:05.824147    1463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-f92dm_gadget(fa6798a0-d84e-4378-9827-737567976910)\"" pod="gadget/gadget-f92dm" podUID="fa6798a0-d84e-4378-9827-737567976910"
	Sep 24 00:29:16 addons-321431 kubelet[1463]: I0924 00:29:16.823915    1463 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7c65d6cfc9-5jg6c" secret="" err="secret \"gcp-auth\" not found"
	Sep 24 00:29:17 addons-321431 kubelet[1463]: I0924 00:29:17.823594    1463 scope.go:117] "RemoveContainer" containerID="5870c979770c7beaba20e7e12fd249c8cb2f270ef808f79b2f65bf9397ed5865"
	Sep 24 00:29:17 addons-321431 kubelet[1463]: E0924 00:29:17.824009    1463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-f92dm_gadget(fa6798a0-d84e-4378-9827-737567976910)\"" pod="gadget/gadget-f92dm" podUID="fa6798a0-d84e-4378-9827-737567976910"
	Sep 24 00:29:29 addons-321431 kubelet[1463]: I0924 00:29:29.823404    1463 scope.go:117] "RemoveContainer" containerID="5870c979770c7beaba20e7e12fd249c8cb2f270ef808f79b2f65bf9397ed5865"
	Sep 24 00:29:29 addons-321431 kubelet[1463]: E0924 00:29:29.824102    1463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-f92dm_gadget(fa6798a0-d84e-4378-9827-737567976910)\"" pod="gadget/gadget-f92dm" podUID="fa6798a0-d84e-4378-9827-737567976910"
	Sep 24 00:29:42 addons-321431 kubelet[1463]: I0924 00:29:42.824114    1463 scope.go:117] "RemoveContainer" containerID="5870c979770c7beaba20e7e12fd249c8cb2f270ef808f79b2f65bf9397ed5865"
	Sep 24 00:29:42 addons-321431 kubelet[1463]: E0924 00:29:42.824326    1463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-f92dm_gadget(fa6798a0-d84e-4378-9827-737567976910)\"" pod="gadget/gadget-f92dm" podUID="fa6798a0-d84e-4378-9827-737567976910"
	Sep 24 00:29:45 addons-321431 kubelet[1463]: I0924 00:29:45.823643    1463 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-2xfkh" secret="" err="secret \"gcp-auth\" not found"
	Sep 24 00:29:53 addons-321431 kubelet[1463]: I0924 00:29:53.824109    1463 scope.go:117] "RemoveContainer" containerID="5870c979770c7beaba20e7e12fd249c8cb2f270ef808f79b2f65bf9397ed5865"
	Sep 24 00:29:53 addons-321431 kubelet[1463]: E0924 00:29:53.824937    1463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-f92dm_gadget(fa6798a0-d84e-4378-9827-737567976910)\"" pod="gadget/gadget-f92dm" podUID="fa6798a0-d84e-4378-9827-737567976910"
	Sep 24 00:30:07 addons-321431 kubelet[1463]: I0924 00:30:07.823657    1463 scope.go:117] "RemoveContainer" containerID="5870c979770c7beaba20e7e12fd249c8cb2f270ef808f79b2f65bf9397ed5865"
	Sep 24 00:30:07 addons-321431 kubelet[1463]: E0924 00:30:07.824338    1463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-f92dm_gadget(fa6798a0-d84e-4378-9827-737567976910)\"" pod="gadget/gadget-f92dm" podUID="fa6798a0-d84e-4378-9827-737567976910"
	Sep 24 00:30:11 addons-321431 kubelet[1463]: I0924 00:30:11.823748    1463 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-ngbxz" secret="" err="secret \"gcp-auth\" not found"
	Sep 24 00:30:12 addons-321431 kubelet[1463]: I0924 00:30:12.823541    1463 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-r8jb7" secret="" err="secret \"gcp-auth\" not found"
	Sep 24 00:30:22 addons-321431 kubelet[1463]: I0924 00:30:22.823857    1463 scope.go:117] "RemoveContainer" containerID="5870c979770c7beaba20e7e12fd249c8cb2f270ef808f79b2f65bf9397ed5865"
	Sep 24 00:30:22 addons-321431 kubelet[1463]: E0924 00:30:22.824072    1463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-f92dm_gadget(fa6798a0-d84e-4378-9827-737567976910)\"" pod="gadget/gadget-f92dm" podUID="fa6798a0-d84e-4378-9827-737567976910"
	Sep 24 00:30:26 addons-321431 kubelet[1463]: I0924 00:30:26.825092    1463 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-7c65d6cfc9-5jg6c" secret="" err="secret \"gcp-auth\" not found"
	Sep 24 00:30:34 addons-321431 kubelet[1463]: I0924 00:30:34.823343    1463 scope.go:117] "RemoveContainer" containerID="5870c979770c7beaba20e7e12fd249c8cb2f270ef808f79b2f65bf9397ed5865"
	Sep 24 00:30:34 addons-321431 kubelet[1463]: E0924 00:30:34.823544    1463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-f92dm_gadget(fa6798a0-d84e-4378-9827-737567976910)\"" pod="gadget/gadget-f92dm" podUID="fa6798a0-d84e-4378-9827-737567976910"
	Sep 24 00:30:48 addons-321431 kubelet[1463]: I0924 00:30:48.824106    1463 scope.go:117] "RemoveContainer" containerID="5870c979770c7beaba20e7e12fd249c8cb2f270ef808f79b2f65bf9397ed5865"
	Sep 24 00:30:48 addons-321431 kubelet[1463]: E0924 00:30:48.824311    1463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-f92dm_gadget(fa6798a0-d84e-4378-9827-737567976910)\"" pod="gadget/gadget-f92dm" podUID="fa6798a0-d84e-4378-9827-737567976910"
	Sep 24 00:31:03 addons-321431 kubelet[1463]: I0924 00:31:03.823816    1463 scope.go:117] "RemoveContainer" containerID="5870c979770c7beaba20e7e12fd249c8cb2f270ef808f79b2f65bf9397ed5865"
	Sep 24 00:31:03 addons-321431 kubelet[1463]: E0924 00:31:03.824154    1463 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-f92dm_gadget(fa6798a0-d84e-4378-9827-737567976910)\"" pod="gadget/gadget-f92dm" podUID="fa6798a0-d84e-4378-9827-737567976910"
	
	
	==> storage-provisioner [13ea99485e6fc56e18f6872b1836b0230e0876321814bed4c71ad58b75d9d69f] <==
	I0924 00:25:07.806643       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 00:25:07.843909       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 00:25:07.843950       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 00:25:07.854140       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 00:25:07.854346       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-321431_40195fde-242e-4c87-aac5-953255143b58!
	I0924 00:25:07.854418       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a98c0a3-763a-4fb0-902e-4d5eac288453", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-321431_40195fde-242e-4c87-aac5-953255143b58 became leader
	I0924 00:25:07.959055       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-321431_40195fde-242e-4c87-aac5-953255143b58!
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-321431 -n addons-321431
helpers_test.go:261: (dbg) Run:  kubectl --context addons-321431 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-z5tsq ingress-nginx-admission-patch-klq4g test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-321431 describe pod ingress-nginx-admission-create-z5tsq ingress-nginx-admission-patch-klq4g test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-321431 describe pod ingress-nginx-admission-create-z5tsq ingress-nginx-admission-patch-klq4g test-job-nginx-0: exit status 1 (86.592618ms)
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-z5tsq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-klq4g" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found
** /stderr **
helpers_test.go:279: kubectl --context addons-321431 describe pod ingress-nginx-admission-create-z5tsq ingress-nginx-admission-patch-klq4g test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.94s)
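Note on the post-mortem step above: helpers_test.go:261 finds the leftover pods by shelling out to kubectl with a server-side field selector. Roughly the same query, expressed directly against the API with client-go, looks like the sketch below. This is illustrative only, not the actual helpers_test.go implementation, and the kubeconfig path is a stand-in:

	// Minimal sketch: list all pods whose status.phase is not Running,
	// mirroring `kubectl get po -A --field-selector=status.phase!=Running`.
	// Not the actual helpers_test.go code; the kubeconfig path is hypothetical.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a clientset from a kubeconfig file (path is a placeholder).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// The field selector is evaluated server-side, so only the
		// non-running pods come back over the wire.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}

Run against this cluster, such a query would return the same three pods reported at helpers_test.go:272: the two ingress-nginx admission pods and the unschedulable test-job-nginx-0.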

TestStartStop/group/old-k8s-version/serial/SecondStart (381.67s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-654890 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-654890 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m17.122001464s)
-- stdout --
	* [old-k8s-version-654890] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-296322/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-296322/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-654890" primary control-plane node in "old-k8s-version-654890" cluster
	* Pulling base image v0.0.45-1727108449-19696 ...
	* Restarting existing docker container for "old-k8s-version-654890" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-654890 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	
	
-- /stdout --
** stderr ** 
	I0924 01:13:54.204124  503471 out.go:345] Setting OutFile to fd 1 ...
	I0924 01:13:54.204329  503471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:13:54.204350  503471 out.go:358] Setting ErrFile to fd 2...
	I0924 01:13:54.204369  503471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:13:54.204651  503471 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-296322/.minikube/bin
	I0924 01:13:54.205051  503471 out.go:352] Setting JSON to false
	I0924 01:13:54.206073  503471 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10580,"bootTime":1727129855,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0924 01:13:54.206164  503471 start.go:139] virtualization:  
	I0924 01:13:54.208866  503471 out.go:177] * [old-k8s-version-654890] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0924 01:13:54.211320  503471 notify.go:220] Checking for updates...
	I0924 01:13:54.211917  503471 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 01:13:54.213608  503471 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 01:13:54.215428  503471 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-296322/kubeconfig
	I0924 01:13:54.217309  503471 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-296322/.minikube
	I0924 01:13:54.218765  503471 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0924 01:13:54.220327  503471 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 01:13:54.222795  503471 config.go:182] Loaded profile config "old-k8s-version-654890": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0924 01:13:54.225445  503471 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0924 01:13:54.227074  503471 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 01:13:54.259027  503471 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0924 01:13:54.259150  503471 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 01:13:54.346326  503471 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:true NGoroutines:67 SystemTime:2024-09-24 01:13:54.327311133 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 01:13:54.346466  503471 docker.go:318] overlay module found
	I0924 01:13:54.348628  503471 out.go:177] * Using the docker driver based on existing profile
	I0924 01:13:54.350249  503471 start.go:297] selected driver: docker
	I0924 01:13:54.350265  503471 start.go:901] validating driver "docker" against &{Name:old-k8s-version-654890 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-654890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:13:54.350378  503471 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 01:13:54.351028  503471 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 01:13:54.435614  503471 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:true NGoroutines:67 SystemTime:2024-09-24 01:13:54.42562535 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 01:13:54.435992  503471 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:13:54.436017  503471 cni.go:84] Creating CNI manager for ""
	I0924 01:13:54.436062  503471 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0924 01:13:54.436105  503471 start.go:340] cluster config:
	{Name:old-k8s-version-654890 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-654890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:13:54.438067  503471 out.go:177] * Starting "old-k8s-version-654890" primary control-plane node in "old-k8s-version-654890" cluster
	I0924 01:13:54.439649  503471 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0924 01:13:54.441225  503471 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0924 01:13:54.442798  503471 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0924 01:13:54.442855  503471 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-296322/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0924 01:13:54.442864  503471 cache.go:56] Caching tarball of preloaded images
	I0924 01:13:54.443081  503471 preload.go:172] Found /home/jenkins/minikube-integration/19696-296322/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0924 01:13:54.443091  503471 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0924 01:13:54.443231  503471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/config.json ...
	I0924 01:13:54.443443  503471 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0924 01:13:54.470462  503471 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon, skipping pull
	I0924 01:13:54.470483  503471 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in daemon, skipping load
	I0924 01:13:54.470497  503471 cache.go:194] Successfully downloaded all kic artifacts
	I0924 01:13:54.470522  503471 start.go:360] acquireMachinesLock for old-k8s-version-654890: {Name:mkc8faf84579ab1d3e086b99a1098fa5376cb386 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:13:54.470574  503471 start.go:364] duration metric: took 36.02µs to acquireMachinesLock for "old-k8s-version-654890"
	I0924 01:13:54.470594  503471 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:13:54.470599  503471 fix.go:54] fixHost starting: 
	I0924 01:13:54.470867  503471 cli_runner.go:164] Run: docker container inspect old-k8s-version-654890 --format={{.State.Status}}
	I0924 01:13:54.495007  503471 fix.go:112] recreateIfNeeded on old-k8s-version-654890: state=Stopped err=<nil>
	W0924 01:13:54.495038  503471 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:13:54.497030  503471 out.go:177] * Restarting existing docker container for "old-k8s-version-654890" ...
	I0924 01:13:54.498737  503471 cli_runner.go:164] Run: docker start old-k8s-version-654890
	I0924 01:13:54.832711  503471 cli_runner.go:164] Run: docker container inspect old-k8s-version-654890 --format={{.State.Status}}
	I0924 01:13:54.857314  503471 kic.go:430] container "old-k8s-version-654890" state is running.
	I0924 01:13:54.857691  503471 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-654890
	I0924 01:13:54.886405  503471 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/config.json ...
	I0924 01:13:54.886642  503471 machine.go:93] provisionDockerMachine start ...
	I0924 01:13:54.886700  503471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-654890
	I0924 01:13:54.911186  503471 main.go:141] libmachine: Using SSH client type: native
	I0924 01:13:54.911452  503471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I0924 01:13:54.911462  503471 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:13:54.914536  503471 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0924 01:13:58.058864  503471 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-654890
	
	I0924 01:13:58.058890  503471 ubuntu.go:169] provisioning hostname "old-k8s-version-654890"
	I0924 01:13:58.058985  503471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-654890
	I0924 01:13:58.091802  503471 main.go:141] libmachine: Using SSH client type: native
	I0924 01:13:58.092066  503471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I0924 01:13:58.092079  503471 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-654890 && echo "old-k8s-version-654890" | sudo tee /etc/hostname
	I0924 01:13:58.276523  503471 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-654890
	
	I0924 01:13:58.276672  503471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-654890
	I0924 01:13:58.314596  503471 main.go:141] libmachine: Using SSH client type: native
	I0924 01:13:58.314847  503471 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I0924 01:13:58.314865  503471 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-654890' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-654890/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-654890' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0924 01:13:58.474948  503471 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:13:58.474979  503471 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19696-296322/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-296322/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-296322/.minikube}
	I0924 01:13:58.475005  503471 ubuntu.go:177] setting up certificates
	I0924 01:13:58.475015  503471 provision.go:84] configureAuth start
	I0924 01:13:58.475079  503471 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-654890
	I0924 01:13:58.495861  503471 provision.go:143] copyHostCerts
	I0924 01:13:58.495929  503471 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-296322/.minikube/ca.pem, removing ...
	I0924 01:13:58.495943  503471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-296322/.minikube/ca.pem
	I0924 01:13:58.496022  503471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-296322/.minikube/ca.pem (1078 bytes)
	I0924 01:13:58.496134  503471 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-296322/.minikube/cert.pem, removing ...
	I0924 01:13:58.496145  503471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-296322/.minikube/cert.pem
	I0924 01:13:58.496175  503471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-296322/.minikube/cert.pem (1123 bytes)
	I0924 01:13:58.496236  503471 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-296322/.minikube/key.pem, removing ...
	I0924 01:13:58.496245  503471 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-296322/.minikube/key.pem
	I0924 01:13:58.496269  503471 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-296322/.minikube/key.pem (1675 bytes)
	I0924 01:13:58.496322  503471 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-296322/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-654890 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-654890]
	I0924 01:13:58.832008  503471 provision.go:177] copyRemoteCerts
	I0924 01:13:58.832079  503471 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:13:58.832126  503471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-654890
	I0924 01:13:58.850965  503471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/old-k8s-version-654890/id_rsa Username:docker}
	I0924 01:13:58.948792  503471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 01:13:58.982724  503471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0924 01:13:59.033314  503471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 01:13:59.069782  503471 provision.go:87] duration metric: took 594.749147ms to configureAuth
	I0924 01:13:59.069808  503471 ubuntu.go:193] setting minikube options for container-runtime
	I0924 01:13:59.070019  503471 config.go:182] Loaded profile config "old-k8s-version-654890": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0924 01:13:59.070028  503471 machine.go:96] duration metric: took 4.183377528s to provisionDockerMachine
	I0924 01:13:59.070036  503471 start.go:293] postStartSetup for "old-k8s-version-654890" (driver="docker")
	I0924 01:13:59.070050  503471 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:13:59.070111  503471 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:13:59.070163  503471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-654890
	I0924 01:13:59.100373  503471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/old-k8s-version-654890/id_rsa Username:docker}
	I0924 01:13:59.206282  503471 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:13:59.215721  503471 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0924 01:13:59.215762  503471 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0924 01:13:59.215785  503471 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0924 01:13:59.215798  503471 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0924 01:13:59.215809  503471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-296322/.minikube/addons for local assets ...
	I0924 01:13:59.215874  503471 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-296322/.minikube/files for local assets ...
	I0924 01:13:59.215964  503471 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-296322/.minikube/files/etc/ssl/certs/3017112.pem -> 3017112.pem in /etc/ssl/certs
	I0924 01:13:59.216072  503471 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:13:59.230989  503471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/files/etc/ssl/certs/3017112.pem --> /etc/ssl/certs/3017112.pem (1708 bytes)
	I0924 01:13:59.271637  503471 start.go:296] duration metric: took 201.585272ms for postStartSetup
	I0924 01:13:59.271741  503471 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 01:13:59.271788  503471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-654890
	I0924 01:13:59.304130  503471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/old-k8s-version-654890/id_rsa Username:docker}
	I0924 01:13:59.404606  503471 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0924 01:13:59.411270  503471 fix.go:56] duration metric: took 4.940661217s for fixHost
	I0924 01:13:59.411297  503471 start.go:83] releasing machines lock for "old-k8s-version-654890", held for 4.940714558s
	I0924 01:13:59.411393  503471 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-654890
	I0924 01:13:59.467129  503471 ssh_runner.go:195] Run: cat /version.json
	I0924 01:13:59.467189  503471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-654890
	I0924 01:13:59.467413  503471 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:13:59.467488  503471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-654890
	I0924 01:13:59.504819  503471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/old-k8s-version-654890/id_rsa Username:docker}
	I0924 01:13:59.511041  503471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/old-k8s-version-654890/id_rsa Username:docker}
	I0924 01:13:59.603516  503471 ssh_runner.go:195] Run: systemctl --version
	I0924 01:13:59.773399  503471 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0924 01:13:59.778292  503471 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0924 01:13:59.807609  503471 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0924 01:13:59.807685  503471 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:13:59.821105  503471 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0924 01:13:59.821222  503471 start.go:495] detecting cgroup driver to use...
	I0924 01:13:59.821292  503471 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0924 01:13:59.821392  503471 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0924 01:13:59.843711  503471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0924 01:13:59.866127  503471 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:13:59.866240  503471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:13:59.886781  503471 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:13:59.904338  503471 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:14:00.051030  503471 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:14:00.241188  503471 docker.go:233] disabling docker service ...
	I0924 01:14:00.241406  503471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:14:00.261071  503471 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:14:00.276926  503471 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:14:00.437871  503471 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:14:00.583750  503471 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:14:00.604367  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:14:00.626887  503471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0924 01:14:00.637514  503471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0924 01:14:00.653919  503471 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0924 01:14:00.654049  503471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0924 01:14:00.669121  503471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0924 01:14:00.680195  503471 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0924 01:14:00.694572  503471 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0924 01:14:00.714251  503471 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:14:00.728493  503471 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0924 01:14:00.742002  503471 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:14:00.756311  503471 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:14:00.768289  503471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:14:00.916897  503471 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0924 01:14:01.230226  503471 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0924 01:14:01.230383  503471 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0924 01:14:01.239282  503471 start.go:563] Will wait 60s for crictl version
	I0924 01:14:01.239352  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:14:01.243507  503471 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:14:01.331003  503471 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0924 01:14:01.331075  503471 ssh_runner.go:195] Run: containerd --version
	I0924 01:14:01.367419  503471 ssh_runner.go:195] Run: containerd --version
	I0924 01:14:01.404801  503471 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I0924 01:14:01.406558  503471 cli_runner.go:164] Run: docker network inspect old-k8s-version-654890 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0924 01:14:01.428691  503471 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0924 01:14:01.432807  503471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:14:01.447158  503471 kubeadm.go:883] updating cluster {Name:old-k8s-version-654890 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-654890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:14:01.447286  503471 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0924 01:14:01.447344  503471 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:14:01.526877  503471 containerd.go:627] all images are preloaded for containerd runtime.
	I0924 01:14:01.526898  503471 containerd.go:534] Images already preloaded, skipping extraction
	I0924 01:14:01.526974  503471 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:14:01.604332  503471 containerd.go:627] all images are preloaded for containerd runtime.
	I0924 01:14:01.604413  503471 cache_images.go:84] Images are preloaded, skipping loading
	I0924 01:14:01.604437  503471 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0924 01:14:01.604595  503471 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-654890 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-654890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:14:01.604704  503471 ssh_runner.go:195] Run: sudo crictl info
	I0924 01:14:01.666077  503471 cni.go:84] Creating CNI manager for ""
	I0924 01:14:01.666100  503471 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0924 01:14:01.666110  503471 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:14:01.666130  503471 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-654890 NodeName:old-k8s-version-654890 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0924 01:14:01.666271  503471 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-654890"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0924 01:14:01.666335  503471 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0924 01:14:01.676663  503471 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:14:01.676818  503471 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:14:01.692502  503471 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0924 01:14:01.725627  503471 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:14:01.757225  503471 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0924 01:14:01.793192  503471 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0924 01:14:01.796892  503471 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:14:01.813642  503471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:14:01.968893  503471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:14:01.988560  503471 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890 for IP: 192.168.76.2
	I0924 01:14:01.988631  503471 certs.go:194] generating shared ca certs ...
	I0924 01:14:01.988664  503471 certs.go:226] acquiring lock for ca certs: {Name:mk4a6ab65221805436b06c42ec4fde316fe470ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:14:01.988865  503471 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-296322/.minikube/ca.key
	I0924 01:14:01.988951  503471 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-296322/.minikube/proxy-client-ca.key
	I0924 01:14:01.988978  503471 certs.go:256] generating profile certs ...
	I0924 01:14:01.989115  503471 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/client.key
	I0924 01:14:01.989226  503471 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/apiserver.key.bf70a5a8
	I0924 01:14:01.989306  503471 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/proxy-client.key
	I0924 01:14:01.989462  503471 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/301711.pem (1338 bytes)
	W0924 01:14:01.989527  503471 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-296322/.minikube/certs/301711_empty.pem, impossibly tiny 0 bytes
	I0924 01:14:01.989553  503471 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:14:01.989609  503471 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca.pem (1078 bytes)
	I0924 01:14:01.989680  503471 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:14:01.989725  503471 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/key.pem (1675 bytes)
	I0924 01:14:01.989824  503471 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-296322/.minikube/files/etc/ssl/certs/3017112.pem (1708 bytes)
	I0924 01:14:01.990670  503471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:14:02.076201  503471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:14:02.134196  503471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:14:02.213049  503471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 01:14:02.276858  503471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0924 01:14:02.341180  503471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 01:14:02.401001  503471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:14:02.440300  503471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 01:14:02.476935  503471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/certs/301711.pem --> /usr/share/ca-certificates/301711.pem (1338 bytes)
	I0924 01:14:02.517375  503471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/files/etc/ssl/certs/3017112.pem --> /usr/share/ca-certificates/3017112.pem (1708 bytes)
	I0924 01:14:02.555137  503471 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:14:02.598883  503471 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:14:02.633428  503471 ssh_runner.go:195] Run: openssl version
	I0924 01:14:02.643815  503471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/301711.pem && ln -fs /usr/share/ca-certificates/301711.pem /etc/ssl/certs/301711.pem"
	I0924 01:14:02.656320  503471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/301711.pem
	I0924 01:14:02.660467  503471 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 00:35 /usr/share/ca-certificates/301711.pem
	I0924 01:14:02.660619  503471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/301711.pem
	I0924 01:14:02.674067  503471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/301711.pem /etc/ssl/certs/51391683.0"
	I0924 01:14:02.687339  503471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3017112.pem && ln -fs /usr/share/ca-certificates/3017112.pem /etc/ssl/certs/3017112.pem"
	I0924 01:14:02.704270  503471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3017112.pem
	I0924 01:14:02.708287  503471 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 00:35 /usr/share/ca-certificates/3017112.pem
	I0924 01:14:02.708433  503471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3017112.pem
	I0924 01:14:02.715649  503471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3017112.pem /etc/ssl/certs/3ec20f2e.0"
	I0924 01:14:02.729215  503471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:14:02.744297  503471 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:14:02.754657  503471 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:14:02.754795  503471 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:14:02.767733  503471 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
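	The openssl/ln pairs above implement OpenSSL's subject-hash trust layout: each CA PEM under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and then symlinked into /etc/ssl/certs under that hash plus ".0" (e.g. b5213941.0 for minikubeCA.pem), which is how OpenSSL locates a trusted CA. A rough Go equivalent of one install step, assuming openssl is on PATH and write access to /etc/ssl/certs:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// installCA mirrors the logged steps: hash the PEM, then force-create
	// the /etc/ssl/certs/<subject-hash>.0 symlink used for CA lookup.
	func installCA(pem string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // ln -fs semantics: replace any existing link
		return os.Symlink(pem, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}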
	I0924 01:14:02.785969  503471 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:14:02.789939  503471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:14:02.801262  503471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:14:02.811497  503471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:14:02.818657  503471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:14:02.830685  503471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:14:02.843615  503471 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0924 01:14:02.854292  503471 kubeadm.go:392] StartCluster: {Name:old-k8s-version-654890 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-654890 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:14:02.854458  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0924 01:14:02.854572  503471 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:14:02.929748  503471 cri.go:89] found id: "e3cc2c47a4cf2f2faddb84e5a279c4f1763c7d9ec5a546753ea5403ac7a5df85"
	I0924 01:14:02.929833  503471 cri.go:89] found id: "ef14acfdf46ffeb7940e8ac4eaff81bfbeec196a22051bc2cb550f636b541f4c"
	I0924 01:14:02.929853  503471 cri.go:89] found id: "321cc991e8aee83030aaf0fd4a0169e888f5b32b23247e48167abb3444235dee"
	I0924 01:14:02.929876  503471 cri.go:89] found id: "5f587d5f804b01bf19560ff7972d8a73b6e2ac92c381a2a9b560dd4e3c01ca76"
	I0924 01:14:02.929908  503471 cri.go:89] found id: "a216776df4a7683d11f652ba6188ed06bc8f4d2aca1cfdfd75fd8caa734d5f7c"
	I0924 01:14:02.929931  503471 cri.go:89] found id: "a3f73fa00d8780bfbae087dd100b5dea468ee8262086a416cf2f1a32baf45e2c"
	I0924 01:14:02.929950  503471 cri.go:89] found id: "92e903457a9761a7af4f917c69d759a5da333110e45a058d985445733540fbd3"
	I0924 01:14:02.929968  503471 cri.go:89] found id: "840c0a636700f21e186c2b7d0a495ae8ba333ce02654b24940c178384884ac55"
	I0924 01:14:02.930000  503471 cri.go:89] found id: "4b3e46c31d87b9f4360a7d001b62bfbec9eaeb7fac41c03c7afdcf345bdc85d2"
	I0924 01:14:02.930026  503471 cri.go:89] found id: ""
	I0924 01:14:02.930113  503471 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0924 01:14:02.954597  503471 cri.go:116] JSON = null
	W0924 01:14:02.954720  503471 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 9
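	The restart path first lists kube-system container IDs through crictl, then cross-checks runc's view of paused containers under the same containerd root; here crictl returned 9 IDs while `runc list -f json` returned null, which is logged only as a warning before the restart continues. A hedged sketch of that cross-check, shelling out to the same two commands seen in the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List kube-system container IDs the way the log line above does.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		ids := strings.Fields(string(out))
		// Ask runc for its view of the same root; a null JSON list against
		// a non-empty crictl result reproduces the "unpause failed" warning.
		runcOut, _ := exec.Command("sudo", "runc",
			"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
		fmt.Printf("crictl found %d containers; runc list JSON = %s\n",
			len(ids), strings.TrimSpace(string(runcOut)))
	}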
	I0924 01:14:02.954845  503471 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:14:02.968780  503471 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:14:02.968803  503471 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:14:02.968860  503471 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:14:02.978764  503471 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:14:02.979248  503471 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-654890" does not appear in /home/jenkins/minikube-integration/19696-296322/kubeconfig
	I0924 01:14:02.979360  503471 kubeconfig.go:62] /home/jenkins/minikube-integration/19696-296322/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-654890" cluster setting kubeconfig missing "old-k8s-version-654890" context setting]
	I0924 01:14:02.979641  503471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-296322/kubeconfig: {Name:mk12cf5f8c4244466c827b22ce4fe2341553290d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:14:02.980879  503471 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:14:02.992825  503471 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0924 01:14:02.992857  503471 kubeadm.go:597] duration metric: took 24.047364ms to restartPrimaryControlPlane
	I0924 01:14:02.992866  503471 kubeadm.go:394] duration metric: took 138.58716ms to StartCluster
	I0924 01:14:02.992881  503471 settings.go:142] acquiring lock: {Name:mk1b01c5281da0b61714a1aa76e5632af5b39da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:14:02.992944  503471 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-296322/kubeconfig
	I0924 01:14:02.993544  503471 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-296322/kubeconfig: {Name:mk12cf5f8c4244466c827b22ce4fe2341553290d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:14:02.993732  503471 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0924 01:14:02.994027  503471 config.go:182] Loaded profile config "old-k8s-version-654890": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0924 01:14:02.994069  503471 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:14:02.994189  503471 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-654890"
	I0924 01:14:02.994218  503471 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-654890"
	W0924 01:14:02.994230  503471 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:14:02.994252  503471 host.go:66] Checking if "old-k8s-version-654890" exists ...
	I0924 01:14:02.994757  503471 cli_runner.go:164] Run: docker container inspect old-k8s-version-654890 --format={{.State.Status}}
	I0924 01:14:02.995130  503471 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-654890"
	I0924 01:14:02.995152  503471 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-654890"
	I0924 01:14:02.995275  503471 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-654890"
	I0924 01:14:02.995311  503471 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-654890"
	W0924 01:14:02.995324  503471 addons.go:243] addon metrics-server should already be in state true
	I0924 01:14:02.995349  503471 host.go:66] Checking if "old-k8s-version-654890" exists ...
	I0924 01:14:02.995435  503471 cli_runner.go:164] Run: docker container inspect old-k8s-version-654890 --format={{.State.Status}}
	I0924 01:14:02.995820  503471 cli_runner.go:164] Run: docker container inspect old-k8s-version-654890 --format={{.State.Status}}
	I0924 01:14:02.997869  503471 addons.go:69] Setting dashboard=true in profile "old-k8s-version-654890"
	I0924 01:14:02.998182  503471 addons.go:234] Setting addon dashboard=true in "old-k8s-version-654890"
	W0924 01:14:02.998264  503471 addons.go:243] addon dashboard should already be in state true
	I0924 01:14:02.998384  503471 host.go:66] Checking if "old-k8s-version-654890" exists ...
	I0924 01:14:03.001793  503471 cli_runner.go:164] Run: docker container inspect old-k8s-version-654890 --format={{.State.Status}}
	I0924 01:14:02.998102  503471 out.go:177] * Verifying Kubernetes components...
	I0924 01:14:03.008529  503471 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:14:03.059510  503471 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:14:03.062190  503471 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:14:03.062215  503471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0924 01:14:03.062285  503471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-654890
	I0924 01:14:03.069493  503471 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-654890"
	W0924 01:14:03.069517  503471 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:14:03.069544  503471 host.go:66] Checking if "old-k8s-version-654890" exists ...
	I0924 01:14:03.069995  503471 cli_runner.go:164] Run: docker container inspect old-k8s-version-654890 --format={{.State.Status}}
	I0924 01:14:03.078896  503471 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0924 01:14:03.081122  503471 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0924 01:14:03.082884  503471 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0924 01:14:03.087364  503471 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0924 01:14:03.087455  503471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-654890
	I0924 01:14:03.103286  503471 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:14:03.107019  503471 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:14:03.107048  503471 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:14:03.107125  503471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-654890
	I0924 01:14:03.132337  503471 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:14:03.132358  503471 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:14:03.132445  503471 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-654890
	I0924 01:14:03.152072  503471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/old-k8s-version-654890/id_rsa Username:docker}
	I0924 01:14:03.152069  503471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/old-k8s-version-654890/id_rsa Username:docker}
	I0924 01:14:03.187277  503471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/old-k8s-version-654890/id_rsa Username:docker}
	I0924 01:14:03.188768  503471 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/old-k8s-version-654890/id_rsa Username:docker}
	I0924 01:14:03.274295  503471 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:14:03.328294  503471 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-654890" to be "Ready" ...
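	From here node_ready.go polls the node object until its Ready condition turns true or the 6m0s budget expires; the `connection refused` errors further down are this poll running before the apiserver is back up. A sketch of such a readiness poll with client-go (package paths assumed; not minikube's actual code):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			} else {
				// Matches the "error getting node ... connection refused"
				// lines emitted while the apiserver is still coming up.
				fmt.Println("error getting node:", err)
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(2 * time.Second):
			}
		}
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		_ = waitNodeReady(ctx, cs, "old-k8s-version-654890")
	}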
	I0924 01:14:03.398167  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:14:03.452886  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:14:03.465793  503471 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 01:14:03.465865  503471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 01:14:03.469256  503471 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0924 01:14:03.469326  503471 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0924 01:14:03.564206  503471 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 01:14:03.564283  503471 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 01:14:03.579359  503471 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0924 01:14:03.579474  503471 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0924 01:14:03.658250  503471 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:14:03.658291  503471 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 01:14:03.699741  503471 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0924 01:14:03.699769  503471 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0924 01:14:03.762700  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:14:03.816205  503471 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0924 01:14:03.816240  503471 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0924 01:14:03.834694  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:03.834729  503471 retry.go:31] will retry after 367.570998ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0924 01:14:03.834771  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:03.834784  503471 retry.go:31] will retry after 175.936732ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
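	The repeated `apply failed, will retry` / `will retry after ...` pairs from here on are minikube's retry helper backing off while localhost:8443 refuses connections; note the delays grow and are jittered across attempts. A minimal sketch of that retry-with-backoff pattern (constants illustrative, not minikube's actual retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries op with a jittered, roughly doubling delay,
	// mirroring the growing "will retry after ..." durations in the log.
	func retryWithBackoff(attempts int, base time.Duration, op func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		// Stand-in for the failing "kubectl apply" against the dead apiserver.
		_ = retryWithBackoff(5, 200*time.Millisecond, func() error {
			return errors.New("connection to the server localhost:8443 was refused")
		})
	}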
	I0924 01:14:03.874898  503471 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0924 01:14:03.874954  503471 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0924 01:14:03.904154  503471 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0924 01:14:03.904180  503471 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0924 01:14:03.937557  503471 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0924 01:14:03.937577  503471 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0924 01:14:03.983939  503471 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0924 01:14:03.983972  503471 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0924 01:14:04.011255  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:14:04.045630  503471 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0924 01:14:04.045657  503471 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	W0924 01:14:04.057883  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:04.057917  503471 retry.go:31] will retry after 200.183082ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:04.104278  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0924 01:14:04.203461  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0924 01:14:04.206292  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:04.206328  503471 retry.go:31] will retry after 512.627283ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:04.258388  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0924 01:14:04.369002  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:04.369034  503471 retry.go:31] will retry after 178.378753ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0924 01:14:04.377633  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:04.377665  503471 retry.go:31] will retry after 254.640462ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0924 01:14:04.439179  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:04.439210  503471 retry.go:31] will retry after 457.430386ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:04.548474  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0924 01:14:04.632920  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0924 01:14:04.649336  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:04.649380  503471 retry.go:31] will retry after 532.909782ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:04.719668  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0924 01:14:04.749312  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:04.749369  503471 retry.go:31] will retry after 696.192479ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0924 01:14:04.834603  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:04.834632  503471 retry.go:31] will retry after 436.646704ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:04.897903  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0924 01:14:04.994009  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:04.994088  503471 retry.go:31] will retry after 685.667663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:05.182533  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0924 01:14:05.261623  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:05.261659  503471 retry.go:31] will retry after 801.400659ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:05.271825  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:14:05.329387  503471 node_ready.go:53] error getting node "old-k8s-version-654890": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-654890": dial tcp 192.168.76.2:8443: connect: connection refused
	W0924 01:14:05.381331  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:05.381415  503471 retry.go:31] will retry after 1.163466849s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:05.446729  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0924 01:14:05.568128  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:05.568160  503471 retry.go:31] will retry after 864.970681ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:05.680416  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0924 01:14:05.798225  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:05.798308  503471 retry.go:31] will retry after 773.671596ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:06.064193  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0924 01:14:06.161894  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:06.161940  503471 retry.go:31] will retry after 768.779034ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:06.433938  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0924 01:14:06.538428  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:06.538465  503471 retry.go:31] will retry after 1.325643457s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:06.545754  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:14:06.573072  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0924 01:14:06.732737  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:06.732772  503471 retry.go:31] will retry after 941.091079ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0924 01:14:06.732854  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:06.732878  503471 retry.go:31] will retry after 1.548834318s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:06.931653  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0924 01:14:07.032048  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:07.032083  503471 retry.go:31] will retry after 1.645231173s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:07.674499  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0924 01:14:07.756406  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:07.756453  503471 retry.go:31] will retry after 1.745097125s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:07.828908  503471 node_ready.go:53] error getting node "old-k8s-version-654890": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-654890": dial tcp 192.168.76.2:8443: connect: connection refused
	I0924 01:14:07.865140  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0924 01:14:07.944881  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:07.944916  503471 retry.go:31] will retry after 1.525652115s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:08.282519  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0924 01:14:08.369628  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:08.369663  503471 retry.go:31] will retry after 1.389512767s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:08.678200  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0924 01:14:08.754754  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:08.754842  503471 retry.go:31] will retry after 2.036571362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:09.470784  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:14:09.502252  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0924 01:14:09.586530  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:09.586569  503471 retry.go:31] will retry after 2.225465399s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0924 01:14:09.644409  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:09.644439  503471 retry.go:31] will retry after 3.271834312s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:09.759822  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:14:09.829582  503471 node_ready.go:53] error getting node "old-k8s-version-654890": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-654890": dial tcp 192.168.76.2:8443: connect: connection refused
	W0924 01:14:09.838784  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:09.838864  503471 retry.go:31] will retry after 2.427502636s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:10.792132  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0924 01:14:10.880161  503471 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:10.880195  503471 retry.go:31] will retry after 2.289911778s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0924 01:14:11.812239  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:14:12.266773  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:14:12.917312  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:14:13.170900  503471 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
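The block above is minikube's addon applier riding out the apiserver restart: every kubectl apply that fails with "connection refused" is retried after a short, growing, jittered delay (retry.go:31) until the apiserver answers again around 01:14:10. A minimal Go sketch of the same pattern, assuming a hypothetical runApply helper; the attempt cap and delay range are illustrative, not minikube's actual tuning:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// runApply shells out to the pinned kubectl exactly as the ssh_runner
// lines above do (hypothetical helper; paths copied from the log).
func runApply(manifest string) error {
	out, err := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.20.0/kubectl",
		"apply", "--force", "-f", manifest).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%w: %s", err, out)
	}
	return nil
}

// applyWithRetry sleeps a jittered 1-3s between attempts, roughly like
// the 1.3s-3.3s waits logged above (illustrative numbers).
func applyWithRetry(manifest string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = runApply(manifest); err == nil {
			return nil
		}
		delay := time.Second + time.Duration(rand.Int63n(int64(2*time.Second)))
		fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 10); err != nil {
		fmt.Println("giving up:", err)
	}
}

The jitter keeps the four addon manifests from retrying in lockstep against a barely-recovered apiserver.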
	I0924 01:14:22.426121  503471 node_ready.go:49] node "old-k8s-version-654890" has status "Ready":"True"
	I0924 01:14:22.426145  503471 node_ready.go:38] duration metric: took 19.09780287s for node "old-k8s-version-654890" to be "Ready" ...
	I0924 01:14:22.426156  503471 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:14:22.577325  503471 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-4h5vb" in "kube-system" namespace to be "Ready" ...
	I0924 01:14:22.670256  503471 pod_ready.go:93] pod "coredns-74ff55c5b-4h5vb" in "kube-system" namespace has status "Ready":"True"
	I0924 01:14:22.670331  503471 pod_ready.go:82] duration metric: took 92.920444ms for pod "coredns-74ff55c5b-4h5vb" in "kube-system" namespace to be "Ready" ...
	I0924 01:14:22.670357  503471 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-654890" in "kube-system" namespace to be "Ready" ...
	I0924 01:14:23.563677  503471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.751399049s)
	I0924 01:14:23.563728  503471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (11.296932192s)
	I0924 01:14:23.610159  503471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.692804081s)
	I0924 01:14:23.610262  503471 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-654890"
	I0924 01:14:23.662110  503471 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.491152355s)
	I0924 01:14:23.664240  503471 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-654890 addons enable metrics-server
	
	I0924 01:14:23.666388  503471 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0924 01:14:23.668218  503471 addons.go:510] duration metric: took 20.674138465s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
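From here the log is dominated by pod_ready.go polling each system-critical pod until its Ready condition reports True. The equivalent check with client-go looks roughly like this; the kubeconfig path, pod name, and 2s/6m cadence mirror the log, but the wiring is an assumption rather than minikube's actual code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	start := time.Now()
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-old-k8s-version-654890", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient apiserver error: keep polling
			}
			return isPodReady(pod), nil
		})
	fmt.Printf("waited %v, err=%v\n", time.Since(start), err)
}

Returning (false, nil) on Get errors is what lets the wait ride out the apiserver's own restart instead of aborting on the first connection refused.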
	I0924 01:14:24.680062  503471 pod_ready.go:103] pod "etcd-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"False"
	[... 34 near-identical pod_ready.go:103 polls, a few seconds apart: "etcd-old-k8s-version-654890" still "Ready":"False" from 01:14:27 through 01:15:41 ...]
	I0924 01:15:43.177206  503471 pod_ready.go:93] pod "etcd-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"True"
	I0924 01:15:43.177231  503471 pod_ready.go:82] duration metric: took 1m20.506853276s for pod "etcd-old-k8s-version-654890" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:43.177248  503471 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-654890" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:43.182450  503471 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"True"
	I0924 01:15:43.182476  503471 pod_ready.go:82] duration metric: took 5.21957ms for pod "kube-apiserver-old-k8s-version-654890" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:43.182507  503471 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-654890" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:45.191626  503471 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:47.190352  503471 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"True"
	I0924 01:15:47.190379  503471 pod_ready.go:82] duration metric: took 4.00786197s for pod "kube-controller-manager-old-k8s-version-654890" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:47.190392  503471 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dctnp" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:47.195753  503471 pod_ready.go:93] pod "kube-proxy-dctnp" in "kube-system" namespace has status "Ready":"True"
	I0924 01:15:47.195776  503471 pod_ready.go:82] duration metric: took 5.355496ms for pod "kube-proxy-dctnp" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:47.195787  503471 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-654890" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:47.201308  503471 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"True"
	I0924 01:15:47.201337  503471 pod_ready.go:82] duration metric: took 5.540809ms for pod "kube-scheduler-old-k8s-version-654890" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:47.201351  503471 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:49.208658  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	[... 103 near-identical pod_ready.go:103 polls, a few seconds apart: "metrics-server-9975d5f86-5qvnr" still "Ready":"False" from 01:15:51 through 01:19:46 ...]
	I0924 01:19:47.207950  503471 pod_ready.go:82] duration metric: took 4m0.006584819s for pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace to be "Ready" ...
	E0924 01:19:47.207975  503471 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0924 01:19:47.207985  503471 pod_ready.go:39] duration metric: took 5m24.781817041s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
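Note the shape of this failure: the per-pod wait advertises 6m0s but gives up after exactly 4m0s with "context deadline exceeded", meaning a shorter context deadline inside the wait expired before the advertised budget did. The general mechanism is Go's nested context deadlines, where the earlier deadline always wins; a tiny illustration of that nesting (the 10s/30s figures are made up):

package main

import (
	"context"
	"fmt"
	"time"
)

func main() {
	// Outer budget shared across the whole wait, like the extra-wait
	// context created at 01:14:22 above.
	outer, cancelOuter := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancelOuter()

	// The per-pod wait nominally allows 30s but inherits the outer deadline.
	perPod, cancelPod := context.WithTimeout(outer, 30*time.Second)
	defer cancelPod()

	<-perPod.Done()
	// Fires after ~10s, not 30s: the parent context expired first.
	fmt.Println(perPod.Err()) // context deadline exceeded
}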
	I0924 01:19:47.207999  503471 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:19:47.208031  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:19:47.208103  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:19:47.248170  503471 cri.go:89] found id: "0d53a8585147b008b66d8a2452e78b09c653dbd3ce70d1d015e55f15fe5bc97f"
	I0924 01:19:47.248237  503471 cri.go:89] found id: "a3f73fa00d8780bfbae087dd100b5dea468ee8262086a416cf2f1a32baf45e2c"
	I0924 01:19:47.248255  503471 cri.go:89] found id: ""
	I0924 01:19:47.248294  503471 logs.go:276] 2 containers: [0d53a8585147b008b66d8a2452e78b09c653dbd3ce70d1d015e55f15fe5bc97f a3f73fa00d8780bfbae087dd100b5dea468ee8262086a416cf2f1a32baf45e2c]
	I0924 01:19:47.248373  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.252444  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.255922  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0924 01:19:47.256037  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:19:47.299255  503471 cri.go:89] found id: "1a02c274b765eec9b703f23f5251e302342e1238e7833b5bdd4c5284939dba4e"
	I0924 01:19:47.299279  503471 cri.go:89] found id: "4b3e46c31d87b9f4360a7d001b62bfbec9eaeb7fac41c03c7afdcf345bdc85d2"
	I0924 01:19:47.299284  503471 cri.go:89] found id: ""
	I0924 01:19:47.299291  503471 logs.go:276] 2 containers: [1a02c274b765eec9b703f23f5251e302342e1238e7833b5bdd4c5284939dba4e 4b3e46c31d87b9f4360a7d001b62bfbec9eaeb7fac41c03c7afdcf345bdc85d2]
	I0924 01:19:47.299363  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.303065  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.307165  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0924 01:19:47.307239  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:19:47.344657  503471 cri.go:89] found id: "726b9b637e70873b8dcf15fbdf3dcedddcb9cd6aefaf87948d0de9b3e24e0da9"
	I0924 01:19:47.344678  503471 cri.go:89] found id: "ef14acfdf46ffeb7940e8ac4eaff81bfbeec196a22051bc2cb550f636b541f4c"
	I0924 01:19:47.344683  503471 cri.go:89] found id: ""
	I0924 01:19:47.344690  503471 logs.go:276] 2 containers: [726b9b637e70873b8dcf15fbdf3dcedddcb9cd6aefaf87948d0de9b3e24e0da9 ef14acfdf46ffeb7940e8ac4eaff81bfbeec196a22051bc2cb550f636b541f4c]
	I0924 01:19:47.344774  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.348345  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.352584  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:19:47.352658  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:19:47.397310  503471 cri.go:89] found id: "11cd4f142a26cd68a385f7146deef4ee88d1b79b18a062cef614b19d4789bc57"
	I0924 01:19:47.397331  503471 cri.go:89] found id: "92e903457a9761a7af4f917c69d759a5da333110e45a058d985445733540fbd3"
	I0924 01:19:47.397336  503471 cri.go:89] found id: ""
	I0924 01:19:47.397343  503471 logs.go:276] 2 containers: [11cd4f142a26cd68a385f7146deef4ee88d1b79b18a062cef614b19d4789bc57 92e903457a9761a7af4f917c69d759a5da333110e45a058d985445733540fbd3]
	I0924 01:19:47.397400  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.401198  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.404571  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:19:47.404647  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:19:47.450078  503471 cri.go:89] found id: "a3efc4d0adcbc5ed4559be7029923280ee64507be7e1478e828a19882024959d"
	I0924 01:19:47.450102  503471 cri.go:89] found id: "a216776df4a7683d11f652ba6188ed06bc8f4d2aca1cfdfd75fd8caa734d5f7c"
	I0924 01:19:47.450107  503471 cri.go:89] found id: ""
	I0924 01:19:47.450114  503471 logs.go:276] 2 containers: [a3efc4d0adcbc5ed4559be7029923280ee64507be7e1478e828a19882024959d a216776df4a7683d11f652ba6188ed06bc8f4d2aca1cfdfd75fd8caa734d5f7c]
	I0924 01:19:47.450195  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.454086  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.457899  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:19:47.457973  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:19:47.505274  503471 cri.go:89] found id: "14974f31e17ef863e970aac9ec385dd3a7df85dabded10c6c4158fa5ae8bb04b"
	I0924 01:19:47.505344  503471 cri.go:89] found id: "840c0a636700f21e186c2b7d0a495ae8ba333ce02654b24940c178384884ac55"
	I0924 01:19:47.505363  503471 cri.go:89] found id: ""
	I0924 01:19:47.505390  503471 logs.go:276] 2 containers: [14974f31e17ef863e970aac9ec385dd3a7df85dabded10c6c4158fa5ae8bb04b 840c0a636700f21e186c2b7d0a495ae8ba333ce02654b24940c178384884ac55]
	I0924 01:19:47.505489  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.509368  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.513105  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0924 01:19:47.513224  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:19:47.552883  503471 cri.go:89] found id: "c1d6f4b139eb39cb24db27d8618300757f678faca88fa20d24e066fa8a75c335"
	I0924 01:19:47.552915  503471 cri.go:89] found id: "321cc991e8aee83030aaf0fd4a0169e888f5b32b23247e48167abb3444235dee"
	I0924 01:19:47.552922  503471 cri.go:89] found id: ""
	I0924 01:19:47.552930  503471 logs.go:276] 2 containers: [c1d6f4b139eb39cb24db27d8618300757f678faca88fa20d24e066fa8a75c335 321cc991e8aee83030aaf0fd4a0169e888f5b32b23247e48167abb3444235dee]
	I0924 01:19:47.553023  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.556760  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.560250  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:19:47.560322  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:19:47.601478  503471 cri.go:89] found id: "ffc127edc7c13cabaa59d04ea0a177c94e9e6bf5bfccbfda48fd77224f966e63"
	I0924 01:19:47.601521  503471 cri.go:89] found id: ""
	I0924 01:19:47.601530  503471 logs.go:276] 1 containers: [ffc127edc7c13cabaa59d04ea0a177c94e9e6bf5bfccbfda48fd77224f966e63]
	I0924 01:19:47.601588  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.605414  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:19:47.605493  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:19:47.662058  503471 cri.go:89] found id: "fb460cedc70328f07448f754198dcea04f2edccb76c1ffe5b7a6a0942e34a3f2"
	I0924 01:19:47.662083  503471 cri.go:89] found id: "fd25616712b12bdb0e36c34f8d44469da4cb256070f158c54dc08e1b4037a13d"
	I0924 01:19:47.662088  503471 cri.go:89] found id: ""
	I0924 01:19:47.662096  503471 logs.go:276] 2 containers: [fb460cedc70328f07448f754198dcea04f2edccb76c1ffe5b7a6a0942e34a3f2 fd25616712b12bdb0e36c34f8d44469da4cb256070f158c54dc08e1b4037a13d]
	I0924 01:19:47.662156  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.666136  503471 ssh_runner.go:195] Run: which crictl
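With the wait abandoned, minikube turns to evidence collection. For each control-plane component it resolves container IDs with sudo crictl ps -a --quiet --name=<component>, which prints one full container ID per line; mostly two per component here, one container from before the restart and one after (kubernetes-dashboard has just one). A sketch of that discovery step, shelling out the same way the ssh_runner lines do:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of all containers (running or exited) whose
// name matches the given component, e.g. "kube-apiserver" or "etcd".
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := containerIDs(c)
		fmt.Printf("%s: %v (err=%v)\n", c, ids, err)
	}
}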
	I0924 01:19:47.669638  503471 logs.go:123] Gathering logs for coredns [726b9b637e70873b8dcf15fbdf3dcedddcb9cd6aefaf87948d0de9b3e24e0da9] ...
	I0924 01:19:47.669676  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 726b9b637e70873b8dcf15fbdf3dcedddcb9cd6aefaf87948d0de9b3e24e0da9"
	I0924 01:19:47.710636  503471 logs.go:123] Gathering logs for kube-scheduler [11cd4f142a26cd68a385f7146deef4ee88d1b79b18a062cef614b19d4789bc57] ...
	I0924 01:19:47.710669  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11cd4f142a26cd68a385f7146deef4ee88d1b79b18a062cef614b19d4789bc57"
	I0924 01:19:47.753288  503471 logs.go:123] Gathering logs for container status ...
	I0924 01:19:47.753320  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:19:47.800841  503471 logs.go:123] Gathering logs for dmesg ...
	I0924 01:19:47.800872  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:19:47.817994  503471 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:19:47.818024  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:19:47.981580  503471 logs.go:123] Gathering logs for kube-apiserver [a3f73fa00d8780bfbae087dd100b5dea468ee8262086a416cf2f1a32baf45e2c] ...
	I0924 01:19:47.981616  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3f73fa00d8780bfbae087dd100b5dea468ee8262086a416cf2f1a32baf45e2c"
	I0924 01:19:48.052176  503471 logs.go:123] Gathering logs for containerd ...
	I0924 01:19:48.052216  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0924 01:19:48.118198  503471 logs.go:123] Gathering logs for kube-proxy [a3efc4d0adcbc5ed4559be7029923280ee64507be7e1478e828a19882024959d] ...
	I0924 01:19:48.118240  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3efc4d0adcbc5ed4559be7029923280ee64507be7e1478e828a19882024959d"
	I0924 01:19:48.160574  503471 logs.go:123] Gathering logs for kubernetes-dashboard [ffc127edc7c13cabaa59d04ea0a177c94e9e6bf5bfccbfda48fd77224f966e63] ...
	I0924 01:19:48.160607  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffc127edc7c13cabaa59d04ea0a177c94e9e6bf5bfccbfda48fd77224f966e63"
	I0924 01:19:48.205909  503471 logs.go:123] Gathering logs for storage-provisioner [fd25616712b12bdb0e36c34f8d44469da4cb256070f158c54dc08e1b4037a13d] ...
	I0924 01:19:48.205939  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd25616712b12bdb0e36c34f8d44469da4cb256070f158c54dc08e1b4037a13d"
	I0924 01:19:48.245163  503471 logs.go:123] Gathering logs for kubelet ...
	I0924 01:19:48.245190  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0924 01:19:48.298751  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431464     666 reflector.go:138] object-"kube-system"/"kube-proxy-token-6n88c": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-6n88c" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:48.299014  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431566     666 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:48.299243  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431638     666 reflector.go:138] object-"kube-system"/"kindnet-token-jt6n9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jt6n9" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:48.299478  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431704     666 reflector.go:138] object-"kube-system"/"storage-provisioner-token-g5gtv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-g5gtv" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:48.299693  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431771     666 reflector.go:138] object-"default"/"default-token-2t7hj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2t7hj" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:48.299915  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431832     666 reflector.go:138] object-"kube-system"/"metrics-server-token-dpjw8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-dpjw8" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:48.300143  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.433069     666 reflector.go:138] object-"kube-system"/"coredns-token-djfwt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-djfwt" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:48.300363  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.433138     666 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:48.307885  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:24 old-k8s-version-654890 kubelet[666]: E0924 01:14:24.244333     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0924 01:19:48.309456  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:25 old-k8s-version-654890 kubelet[666]: E0924 01:14:25.186030     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.312301  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:39 old-k8s-version-654890 kubelet[666]: E0924 01:14:39.793083     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0924 01:19:48.314771  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:51 old-k8s-version-654890 kubelet[666]: E0924 01:14:51.304991     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.315114  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:52 old-k8s-version-654890 kubelet[666]: E0924 01:14:52.318075     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.315302  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:53 old-k8s-version-654890 kubelet[666]: E0924 01:14:53.784377     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.315745  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:55 old-k8s-version-654890 kubelet[666]: E0924 01:14:55.344748     666 pod_workers.go:191] Error syncing pod c12ca6a0-fd9b-45bf-9da0-2ec1193cce32 ("storage-provisioner_kube-system(c12ca6a0-fd9b-45bf-9da0-2ec1193cce32)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c12ca6a0-fd9b-45bf-9da0-2ec1193cce32)"
	W0924 01:19:48.316678  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:01 old-k8s-version-654890 kubelet[666]: E0924 01:15:01.366255     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.319205  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:07 old-k8s-version-654890 kubelet[666]: E0924 01:15:07.792722     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0924 01:19:48.319672  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:11 old-k8s-version-654890 kubelet[666]: E0924 01:15:11.008383     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.319860  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:20 old-k8s-version-654890 kubelet[666]: E0924 01:15:20.784619     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.320481  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:26 old-k8s-version-654890 kubelet[666]: E0924 01:15:26.452520     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.320821  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:30 old-k8s-version-654890 kubelet[666]: E0924 01:15:30.986615     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.321011  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:33 old-k8s-version-654890 kubelet[666]: E0924 01:15:33.783850     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.321345  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:44 old-k8s-version-654890 kubelet[666]: E0924 01:15:44.783545     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.321534  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:44 old-k8s-version-654890 kubelet[666]: E0924 01:15:44.784908     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.321873  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:59 old-k8s-version-654890 kubelet[666]: E0924 01:15:59.784164     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.324342  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:59 old-k8s-version-654890 kubelet[666]: E0924 01:15:59.792225     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0924 01:19:48.324660  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:14 old-k8s-version-654890 kubelet[666]: E0924 01:16:14.789130     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.325123  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:15 old-k8s-version-654890 kubelet[666]: E0924 01:16:15.580475     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.325467  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:20 old-k8s-version-654890 kubelet[666]: E0924 01:16:20.992161     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.325652  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:29 old-k8s-version-654890 kubelet[666]: E0924 01:16:29.787709     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.326017  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:34 old-k8s-version-654890 kubelet[666]: E0924 01:16:34.784335     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.326235  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:42 old-k8s-version-654890 kubelet[666]: E0924 01:16:42.784427     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.326575  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:45 old-k8s-version-654890 kubelet[666]: E0924 01:16:45.783505     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.326768  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:53 old-k8s-version-654890 kubelet[666]: E0924 01:16:53.784108     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.327107  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:57 old-k8s-version-654890 kubelet[666]: E0924 01:16:57.783801     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.327294  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:06 old-k8s-version-654890 kubelet[666]: E0924 01:17:06.784121     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.327631  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:11 old-k8s-version-654890 kubelet[666]: E0924 01:17:11.783481     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.327817  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:18 old-k8s-version-654890 kubelet[666]: E0924 01:17:18.783911     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.328150  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:24 old-k8s-version-654890 kubelet[666]: E0924 01:17:24.784083     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.330620  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:31 old-k8s-version-654890 kubelet[666]: E0924 01:17:31.792392     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0924 01:19:48.331220  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:39 old-k8s-version-654890 kubelet[666]: E0924 01:17:39.856284     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.331553  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:40 old-k8s-version-654890 kubelet[666]: E0924 01:17:40.985923     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.331738  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:45 old-k8s-version-654890 kubelet[666]: E0924 01:17:45.784024     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.332067  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:55 old-k8s-version-654890 kubelet[666]: E0924 01:17:55.783900     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.332252  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:59 old-k8s-version-654890 kubelet[666]: E0924 01:17:59.784186     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.332585  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:06 old-k8s-version-654890 kubelet[666]: E0924 01:18:06.788365     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.332774  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:12 old-k8s-version-654890 kubelet[666]: E0924 01:18:12.783859     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.333106  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:18 old-k8s-version-654890 kubelet[666]: E0924 01:18:18.783560     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.333291  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:27 old-k8s-version-654890 kubelet[666]: E0924 01:18:27.783904     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.333627  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:30 old-k8s-version-654890 kubelet[666]: E0924 01:18:30.783887     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.333814  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:41 old-k8s-version-654890 kubelet[666]: E0924 01:18:41.784268     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.334149  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:44 old-k8s-version-654890 kubelet[666]: E0924 01:18:44.783947     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.334334  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:55 old-k8s-version-654890 kubelet[666]: E0924 01:18:55.783966     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.334664  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:58 old-k8s-version-654890 kubelet[666]: E0924 01:18:58.784580     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.334851  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:09 old-k8s-version-654890 kubelet[666]: E0924 01:19:09.783890     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.335185  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:11 old-k8s-version-654890 kubelet[666]: E0924 01:19:11.783671     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.335370  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:20 old-k8s-version-654890 kubelet[666]: E0924 01:19:20.784316     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.335699  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:23 old-k8s-version-654890 kubelet[666]: E0924 01:19:23.783518     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.336028  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:34 old-k8s-version-654890 kubelet[666]: E0924 01:19:34.788045     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.336213  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:34 old-k8s-version-654890 kubelet[666]: E0924 01:19:34.792957     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.336399  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:45 old-k8s-version-654890 kubelet[666]: E0924 01:19:45.783770     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
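	The ErrImagePull and ImagePullBackOff entries above all share one root cause: the metrics-server pod references fake.domain/registry.k8s.io/echoserver:1.4, and fake.domain never resolves, so every pull attempt fails with "no such host" and the kubelet backs off with increasing delays. A minimal sketch of reproducing the failure by hand, assuming shell access to the node (for example via minikube ssh -p old-k8s-version-654890):

	  # Pull the same unresolvable image through the CRI; this is expected to fail
	  # with the identical "dial tcp: lookup fake.domain ... no such host" error.
	  sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4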
	I0924 01:19:48.336409  503471 logs.go:123] Gathering logs for kube-controller-manager [840c0a636700f21e186c2b7d0a495ae8ba333ce02654b24940c178384884ac55] ...
	I0924 01:19:48.336425  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 840c0a636700f21e186c2b7d0a495ae8ba333ce02654b24940c178384884ac55"
	I0924 01:19:48.394895  503471 logs.go:123] Gathering logs for kindnet [c1d6f4b139eb39cb24db27d8618300757f678faca88fa20d24e066fa8a75c335] ...
	I0924 01:19:48.394935  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1d6f4b139eb39cb24db27d8618300757f678faca88fa20d24e066fa8a75c335"
	I0924 01:19:48.445555  503471 logs.go:123] Gathering logs for coredns [ef14acfdf46ffeb7940e8ac4eaff81bfbeec196a22051bc2cb550f636b541f4c] ...
	I0924 01:19:48.445586  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef14acfdf46ffeb7940e8ac4eaff81bfbeec196a22051bc2cb550f636b541f4c"
	I0924 01:19:48.484819  503471 logs.go:123] Gathering logs for kube-scheduler [92e903457a9761a7af4f917c69d759a5da333110e45a058d985445733540fbd3] ...
	I0924 01:19:48.484886  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92e903457a9761a7af4f917c69d759a5da333110e45a058d985445733540fbd3"
	I0924 01:19:48.531995  503471 logs.go:123] Gathering logs for kube-proxy [a216776df4a7683d11f652ba6188ed06bc8f4d2aca1cfdfd75fd8caa734d5f7c] ...
	I0924 01:19:48.532082  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a216776df4a7683d11f652ba6188ed06bc8f4d2aca1cfdfd75fd8caa734d5f7c"
	I0924 01:19:48.573118  503471 logs.go:123] Gathering logs for kube-controller-manager [14974f31e17ef863e970aac9ec385dd3a7df85dabded10c6c4158fa5ae8bb04b] ...
	I0924 01:19:48.573189  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14974f31e17ef863e970aac9ec385dd3a7df85dabded10c6c4158fa5ae8bb04b"
	I0924 01:19:48.633525  503471 logs.go:123] Gathering logs for kindnet [321cc991e8aee83030aaf0fd4a0169e888f5b32b23247e48167abb3444235dee] ...
	I0924 01:19:48.633563  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 321cc991e8aee83030aaf0fd4a0169e888f5b32b23247e48167abb3444235dee"
	I0924 01:19:48.680372  503471 logs.go:123] Gathering logs for kube-apiserver [0d53a8585147b008b66d8a2452e78b09c653dbd3ce70d1d015e55f15fe5bc97f] ...
	I0924 01:19:48.680403  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d53a8585147b008b66d8a2452e78b09c653dbd3ce70d1d015e55f15fe5bc97f"
	I0924 01:19:48.742350  503471 logs.go:123] Gathering logs for etcd [1a02c274b765eec9b703f23f5251e302342e1238e7833b5bdd4c5284939dba4e] ...
	I0924 01:19:48.742384  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a02c274b765eec9b703f23f5251e302342e1238e7833b5bdd4c5284939dba4e"
	I0924 01:19:48.797001  503471 logs.go:123] Gathering logs for etcd [4b3e46c31d87b9f4360a7d001b62bfbec9eaeb7fac41c03c7afdcf345bdc85d2] ...
	I0924 01:19:48.797035  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3e46c31d87b9f4360a7d001b62bfbec9eaeb7fac41c03c7afdcf345bdc85d2"
	I0924 01:19:48.847657  503471 logs.go:123] Gathering logs for storage-provisioner [fb460cedc70328f07448f754198dcea04f2edccb76c1ffe5b7a6a0942e34a3f2] ...
	I0924 01:19:48.847687  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb460cedc70328f07448f754198dcea04f2edccb76c1ffe5b7a6a0942e34a3f2"
	I0924 01:19:48.891111  503471 out.go:358] Setting ErrFile to fd 2...
	I0924 01:19:48.891138  503471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0924 01:19:48.891192  503471 out.go:270] X Problems detected in kubelet:
	W0924 01:19:48.891209  503471 out.go:270]   Sep 24 01:19:20 old-k8s-version-654890 kubelet[666]: E0924 01:19:20.784316     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.891225  503471 out.go:270]   Sep 24 01:19:23 old-k8s-version-654890 kubelet[666]: E0924 01:19:23.783518     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.891256  503471 out.go:270]   Sep 24 01:19:34 old-k8s-version-654890 kubelet[666]: E0924 01:19:34.788045     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.891265  503471 out.go:270]   Sep 24 01:19:34 old-k8s-version-654890 kubelet[666]: E0924 01:19:34.792957     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.891275  503471 out.go:270]   Sep 24 01:19:45 old-k8s-version-654890 kubelet[666]: E0924 01:19:45.783770     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0924 01:19:48.891281  503471 out.go:358] Setting ErrFile to fd 2...
	I0924 01:19:48.891290  503471 out.go:392] TERM=,COLORTERM=, which probably does not support color
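	With the log sweep finished, the run below turns to apiserver health: it waits for the kube-apiserver process (pgrep) and then polls the healthz endpoint. Roughly the same check can be done by hand; a sketch assuming the kubeconfig context this test profile creates (minikube names contexts after the profile):

	  # Query the apiserver health endpoint directly; a healthy apiserver answers "ok".
	  kubectl --context old-k8s-version-654890 get --raw='/healthz'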
	I0924 01:19:58.892677  503471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:19:58.904580  503471 api_server.go:72] duration metric: took 5m55.910809038s to wait for apiserver process to appear ...
	I0924 01:19:58.904607  503471 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:19:58.904644  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:19:58.904701  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:19:58.944094  503471 cri.go:89] found id: "0d53a8585147b008b66d8a2452e78b09c653dbd3ce70d1d015e55f15fe5bc97f"
	I0924 01:19:58.944123  503471 cri.go:89] found id: "a3f73fa00d8780bfbae087dd100b5dea468ee8262086a416cf2f1a32baf45e2c"
	I0924 01:19:58.944129  503471 cri.go:89] found id: ""
	I0924 01:19:58.944140  503471 logs.go:276] 2 containers: [0d53a8585147b008b66d8a2452e78b09c653dbd3ce70d1d015e55f15fe5bc97f a3f73fa00d8780bfbae087dd100b5dea468ee8262086a416cf2f1a32baf45e2c]
	I0924 01:19:58.944210  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:58.948097  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:58.952102  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0924 01:19:58.952189  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:19:58.990627  503471 cri.go:89] found id: "1a02c274b765eec9b703f23f5251e302342e1238e7833b5bdd4c5284939dba4e"
	I0924 01:19:58.990651  503471 cri.go:89] found id: "4b3e46c31d87b9f4360a7d001b62bfbec9eaeb7fac41c03c7afdcf345bdc85d2"
	I0924 01:19:58.990657  503471 cri.go:89] found id: ""
	I0924 01:19:58.990664  503471 logs.go:276] 2 containers: [1a02c274b765eec9b703f23f5251e302342e1238e7833b5bdd4c5284939dba4e 4b3e46c31d87b9f4360a7d001b62bfbec9eaeb7fac41c03c7afdcf345bdc85d2]
	I0924 01:19:58.990744  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:58.994962  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:58.998358  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0924 01:19:58.998428  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:19:59.039355  503471 cri.go:89] found id: "726b9b637e70873b8dcf15fbdf3dcedddcb9cd6aefaf87948d0de9b3e24e0da9"
	I0924 01:19:59.039379  503471 cri.go:89] found id: "ef14acfdf46ffeb7940e8ac4eaff81bfbeec196a22051bc2cb550f636b541f4c"
	I0924 01:19:59.039384  503471 cri.go:89] found id: ""
	I0924 01:19:59.039391  503471 logs.go:276] 2 containers: [726b9b637e70873b8dcf15fbdf3dcedddcb9cd6aefaf87948d0de9b3e24e0da9 ef14acfdf46ffeb7940e8ac4eaff81bfbeec196a22051bc2cb550f636b541f4c]
	I0924 01:19:59.039451  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.043628  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.047352  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:19:59.047432  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:19:59.088932  503471 cri.go:89] found id: "11cd4f142a26cd68a385f7146deef4ee88d1b79b18a062cef614b19d4789bc57"
	I0924 01:19:59.088957  503471 cri.go:89] found id: "92e903457a9761a7af4f917c69d759a5da333110e45a058d985445733540fbd3"
	I0924 01:19:59.088963  503471 cri.go:89] found id: ""
	I0924 01:19:59.088970  503471 logs.go:276] 2 containers: [11cd4f142a26cd68a385f7146deef4ee88d1b79b18a062cef614b19d4789bc57 92e903457a9761a7af4f917c69d759a5da333110e45a058d985445733540fbd3]
	I0924 01:19:59.089029  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.093313  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.096780  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:19:59.096850  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:19:59.137471  503471 cri.go:89] found id: "a3efc4d0adcbc5ed4559be7029923280ee64507be7e1478e828a19882024959d"
	I0924 01:19:59.137492  503471 cri.go:89] found id: "a216776df4a7683d11f652ba6188ed06bc8f4d2aca1cfdfd75fd8caa734d5f7c"
	I0924 01:19:59.137497  503471 cri.go:89] found id: ""
	I0924 01:19:59.137505  503471 logs.go:276] 2 containers: [a3efc4d0adcbc5ed4559be7029923280ee64507be7e1478e828a19882024959d a216776df4a7683d11f652ba6188ed06bc8f4d2aca1cfdfd75fd8caa734d5f7c]
	I0924 01:19:59.137584  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.141423  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.144785  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:19:59.144903  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:19:59.183946  503471 cri.go:89] found id: "14974f31e17ef863e970aac9ec385dd3a7df85dabded10c6c4158fa5ae8bb04b"
	I0924 01:19:59.183968  503471 cri.go:89] found id: "840c0a636700f21e186c2b7d0a495ae8ba333ce02654b24940c178384884ac55"
	I0924 01:19:59.183973  503471 cri.go:89] found id: ""
	I0924 01:19:59.183980  503471 logs.go:276] 2 containers: [14974f31e17ef863e970aac9ec385dd3a7df85dabded10c6c4158fa5ae8bb04b 840c0a636700f21e186c2b7d0a495ae8ba333ce02654b24940c178384884ac55]
	I0924 01:19:59.184038  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.187604  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.191086  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0924 01:19:59.191163  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:19:59.233371  503471 cri.go:89] found id: "c1d6f4b139eb39cb24db27d8618300757f678faca88fa20d24e066fa8a75c335"
	I0924 01:19:59.233394  503471 cri.go:89] found id: "321cc991e8aee83030aaf0fd4a0169e888f5b32b23247e48167abb3444235dee"
	I0924 01:19:59.233399  503471 cri.go:89] found id: ""
	I0924 01:19:59.233407  503471 logs.go:276] 2 containers: [c1d6f4b139eb39cb24db27d8618300757f678faca88fa20d24e066fa8a75c335 321cc991e8aee83030aaf0fd4a0169e888f5b32b23247e48167abb3444235dee]
	I0924 01:19:59.233487  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.237332  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.241220  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:19:59.241332  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:19:59.284694  503471 cri.go:89] found id: "fb460cedc70328f07448f754198dcea04f2edccb76c1ffe5b7a6a0942e34a3f2"
	I0924 01:19:59.284770  503471 cri.go:89] found id: "fd25616712b12bdb0e36c34f8d44469da4cb256070f158c54dc08e1b4037a13d"
	I0924 01:19:59.284783  503471 cri.go:89] found id: ""
	I0924 01:19:59.284791  503471 logs.go:276] 2 containers: [fb460cedc70328f07448f754198dcea04f2edccb76c1ffe5b7a6a0942e34a3f2 fd25616712b12bdb0e36c34f8d44469da4cb256070f158c54dc08e1b4037a13d]
	I0924 01:19:59.284904  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.288841  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.292850  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:19:59.292961  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:19:59.342825  503471 cri.go:89] found id: "ffc127edc7c13cabaa59d04ea0a177c94e9e6bf5bfccbfda48fd77224f966e63"
	I0924 01:19:59.342879  503471 cri.go:89] found id: ""
	I0924 01:19:59.342902  503471 logs.go:276] 1 containers: [ffc127edc7c13cabaa59d04ea0a177c94e9e6bf5bfccbfda48fd77224f966e63]
	I0924 01:19:59.343028  503471 ssh_runner.go:195] Run: which crictl
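	Each cri.go / logs.go pair above follows the same two-step pattern: enumerate the container IDs for a component with crictl ps, then tail each container's log with crictl logs, exactly as in the ssh_runner commands shown. A sketch of the same pattern collapsed into one loop, assuming shell access on the node:

	  # List every kube-apiserver container (running or exited) and tail its log,
	  # mirroring the per-container commands minikube issues above.
	  for id in $(sudo crictl ps -a --quiet --name=kube-apiserver); do
	    sudo /usr/bin/crictl logs --tail 400 "$id"
	  done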
	I0924 01:19:59.346892  503471 logs.go:123] Gathering logs for kindnet [321cc991e8aee83030aaf0fd4a0169e888f5b32b23247e48167abb3444235dee] ...
	I0924 01:19:59.346956  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 321cc991e8aee83030aaf0fd4a0169e888f5b32b23247e48167abb3444235dee"
	I0924 01:19:59.390245  503471 logs.go:123] Gathering logs for storage-provisioner [fb460cedc70328f07448f754198dcea04f2edccb76c1ffe5b7a6a0942e34a3f2] ...
	I0924 01:19:59.390298  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb460cedc70328f07448f754198dcea04f2edccb76c1ffe5b7a6a0942e34a3f2"
	I0924 01:19:59.430145  503471 logs.go:123] Gathering logs for coredns [ef14acfdf46ffeb7940e8ac4eaff81bfbeec196a22051bc2cb550f636b541f4c] ...
	I0924 01:19:59.430171  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef14acfdf46ffeb7940e8ac4eaff81bfbeec196a22051bc2cb550f636b541f4c"
	I0924 01:19:59.477526  503471 logs.go:123] Gathering logs for kube-scheduler [92e903457a9761a7af4f917c69d759a5da333110e45a058d985445733540fbd3] ...
	I0924 01:19:59.477553  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92e903457a9761a7af4f917c69d759a5da333110e45a058d985445733540fbd3"
	I0924 01:19:59.522254  503471 logs.go:123] Gathering logs for kube-proxy [a3efc4d0adcbc5ed4559be7029923280ee64507be7e1478e828a19882024959d] ...
	I0924 01:19:59.522285  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3efc4d0adcbc5ed4559be7029923280ee64507be7e1478e828a19882024959d"
	I0924 01:19:59.578762  503471 logs.go:123] Gathering logs for etcd [1a02c274b765eec9b703f23f5251e302342e1238e7833b5bdd4c5284939dba4e] ...
	I0924 01:19:59.578860  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a02c274b765eec9b703f23f5251e302342e1238e7833b5bdd4c5284939dba4e"
	I0924 01:19:59.621417  503471 logs.go:123] Gathering logs for kubelet ...
	I0924 01:19:59.621447  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0924 01:19:59.677632  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431464     666 reflector.go:138] object-"kube-system"/"kube-proxy-token-6n88c": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-6n88c" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:59.677885  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431566     666 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:59.678107  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431638     666 reflector.go:138] object-"kube-system"/"kindnet-token-jt6n9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jt6n9" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:59.678337  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431704     666 reflector.go:138] object-"kube-system"/"storage-provisioner-token-g5gtv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-g5gtv" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:59.678549  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431771     666 reflector.go:138] object-"default"/"default-token-2t7hj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2t7hj" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:59.678769  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431832     666 reflector.go:138] object-"kube-system"/"metrics-server-token-dpjw8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-dpjw8" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:59.679001  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.433069     666 reflector.go:138] object-"kube-system"/"coredns-token-djfwt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-djfwt" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:59.679205  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.433138     666 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:59.686595  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:24 old-k8s-version-654890 kubelet[666]: E0924 01:14:24.244333     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0924 01:19:59.688182  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:25 old-k8s-version-654890 kubelet[666]: E0924 01:14:25.186030     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.690962  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:39 old-k8s-version-654890 kubelet[666]: E0924 01:14:39.793083     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0924 01:19:59.693368  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:51 old-k8s-version-654890 kubelet[666]: E0924 01:14:51.304991     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.693697  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:52 old-k8s-version-654890 kubelet[666]: E0924 01:14:52.318075     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.693885  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:53 old-k8s-version-654890 kubelet[666]: E0924 01:14:53.784377     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.694358  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:55 old-k8s-version-654890 kubelet[666]: E0924 01:14:55.344748     666 pod_workers.go:191] Error syncing pod c12ca6a0-fd9b-45bf-9da0-2ec1193cce32 ("storage-provisioner_kube-system(c12ca6a0-fd9b-45bf-9da0-2ec1193cce32)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c12ca6a0-fd9b-45bf-9da0-2ec1193cce32)"
	W0924 01:19:59.695337  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:01 old-k8s-version-654890 kubelet[666]: E0924 01:15:01.366255     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.697828  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:07 old-k8s-version-654890 kubelet[666]: E0924 01:15:07.792722     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0924 01:19:59.698297  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:11 old-k8s-version-654890 kubelet[666]: E0924 01:15:11.008383     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.698482  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:20 old-k8s-version-654890 kubelet[666]: E0924 01:15:20.784619     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.699079  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:26 old-k8s-version-654890 kubelet[666]: E0924 01:15:26.452520     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.699407  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:30 old-k8s-version-654890 kubelet[666]: E0924 01:15:30.986615     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.699593  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:33 old-k8s-version-654890 kubelet[666]: E0924 01:15:33.783850     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.699929  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:44 old-k8s-version-654890 kubelet[666]: E0924 01:15:44.783545     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.700115  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:44 old-k8s-version-654890 kubelet[666]: E0924 01:15:44.784908     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.700443  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:59 old-k8s-version-654890 kubelet[666]: E0924 01:15:59.784164     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.702897  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:59 old-k8s-version-654890 kubelet[666]: E0924 01:15:59.792225     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0924 01:19:59.703245  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:14 old-k8s-version-654890 kubelet[666]: E0924 01:16:14.789130     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.703708  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:15 old-k8s-version-654890 kubelet[666]: E0924 01:16:15.580475     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.704038  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:20 old-k8s-version-654890 kubelet[666]: E0924 01:16:20.992161     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.704222  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:29 old-k8s-version-654890 kubelet[666]: E0924 01:16:29.787709     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.704549  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:34 old-k8s-version-654890 kubelet[666]: E0924 01:16:34.784335     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.704734  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:42 old-k8s-version-654890 kubelet[666]: E0924 01:16:42.784427     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.705065  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:45 old-k8s-version-654890 kubelet[666]: E0924 01:16:45.783505     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.705249  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:53 old-k8s-version-654890 kubelet[666]: E0924 01:16:53.784108     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.705575  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:57 old-k8s-version-654890 kubelet[666]: E0924 01:16:57.783801     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.705759  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:06 old-k8s-version-654890 kubelet[666]: E0924 01:17:06.784121     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.706092  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:11 old-k8s-version-654890 kubelet[666]: E0924 01:17:11.783481     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.706279  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:18 old-k8s-version-654890 kubelet[666]: E0924 01:17:18.783911     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.706609  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:24 old-k8s-version-654890 kubelet[666]: E0924 01:17:24.784083     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.709045  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:31 old-k8s-version-654890 kubelet[666]: E0924 01:17:31.792392     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0924 01:19:59.709642  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:39 old-k8s-version-654890 kubelet[666]: E0924 01:17:39.856284     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.709971  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:40 old-k8s-version-654890 kubelet[666]: E0924 01:17:40.985923     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.710166  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:45 old-k8s-version-654890 kubelet[666]: E0924 01:17:45.784024     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.710493  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:55 old-k8s-version-654890 kubelet[666]: E0924 01:17:55.783900     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.710677  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:59 old-k8s-version-654890 kubelet[666]: E0924 01:17:59.784186     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.711009  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:06 old-k8s-version-654890 kubelet[666]: E0924 01:18:06.788365     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.711196  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:12 old-k8s-version-654890 kubelet[666]: E0924 01:18:12.783859     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.711569  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:18 old-k8s-version-654890 kubelet[666]: E0924 01:18:18.783560     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.711755  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:27 old-k8s-version-654890 kubelet[666]: E0924 01:18:27.783904     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.712084  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:30 old-k8s-version-654890 kubelet[666]: E0924 01:18:30.783887     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.712268  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:41 old-k8s-version-654890 kubelet[666]: E0924 01:18:41.784268     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.712597  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:44 old-k8s-version-654890 kubelet[666]: E0924 01:18:44.783947     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.712784  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:55 old-k8s-version-654890 kubelet[666]: E0924 01:18:55.783966     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.713114  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:58 old-k8s-version-654890 kubelet[666]: E0924 01:18:58.784580     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.713298  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:09 old-k8s-version-654890 kubelet[666]: E0924 01:19:09.783890     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.713628  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:11 old-k8s-version-654890 kubelet[666]: E0924 01:19:11.783671     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.713812  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:20 old-k8s-version-654890 kubelet[666]: E0924 01:19:20.784316     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.714144  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:23 old-k8s-version-654890 kubelet[666]: E0924 01:19:23.783518     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.714487  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:34 old-k8s-version-654890 kubelet[666]: E0924 01:19:34.788045     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.714673  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:34 old-k8s-version-654890 kubelet[666]: E0924 01:19:34.792957     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.714857  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:45 old-k8s-version-654890 kubelet[666]: E0924 01:19:45.783770     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.715195  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:49 old-k8s-version-654890 kubelet[666]: E0924 01:19:49.783506     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	I0924 01:19:59.715208  503471 logs.go:123] Gathering logs for kube-apiserver [0d53a8585147b008b66d8a2452e78b09c653dbd3ce70d1d015e55f15fe5bc97f] ...
	I0924 01:19:59.715224  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d53a8585147b008b66d8a2452e78b09c653dbd3ce70d1d015e55f15fe5bc97f"
	I0924 01:19:59.775878  503471 logs.go:123] Gathering logs for kube-apiserver [a3f73fa00d8780bfbae087dd100b5dea468ee8262086a416cf2f1a32baf45e2c] ...
	I0924 01:19:59.775913  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3f73fa00d8780bfbae087dd100b5dea468ee8262086a416cf2f1a32baf45e2c"
	I0924 01:19:59.848947  503471 logs.go:123] Gathering logs for coredns [726b9b637e70873b8dcf15fbdf3dcedddcb9cd6aefaf87948d0de9b3e24e0da9] ...
	I0924 01:19:59.848982  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 726b9b637e70873b8dcf15fbdf3dcedddcb9cd6aefaf87948d0de9b3e24e0da9"
	I0924 01:19:59.893787  503471 logs.go:123] Gathering logs for kube-scheduler [11cd4f142a26cd68a385f7146deef4ee88d1b79b18a062cef614b19d4789bc57] ...
	I0924 01:19:59.893817  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11cd4f142a26cd68a385f7146deef4ee88d1b79b18a062cef614b19d4789bc57"
	I0924 01:19:59.934822  503471 logs.go:123] Gathering logs for kube-proxy [a216776df4a7683d11f652ba6188ed06bc8f4d2aca1cfdfd75fd8caa734d5f7c] ...
	I0924 01:19:59.934854  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a216776df4a7683d11f652ba6188ed06bc8f4d2aca1cfdfd75fd8caa734d5f7c"
	I0924 01:19:59.975700  503471 logs.go:123] Gathering logs for kube-controller-manager [14974f31e17ef863e970aac9ec385dd3a7df85dabded10c6c4158fa5ae8bb04b] ...
	I0924 01:19:59.975727  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14974f31e17ef863e970aac9ec385dd3a7df85dabded10c6c4158fa5ae8bb04b"
	I0924 01:20:00.150686  503471 logs.go:123] Gathering logs for containerd ...
	I0924 01:20:00.150774  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0924 01:20:00.348822  503471 logs.go:123] Gathering logs for dmesg ...
	I0924 01:20:00.348953  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:20:00.421331  503471 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:20:00.421432  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:20:00.860083  503471 logs.go:123] Gathering logs for etcd [4b3e46c31d87b9f4360a7d001b62bfbec9eaeb7fac41c03c7afdcf345bdc85d2] ...
	I0924 01:20:00.860117  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3e46c31d87b9f4360a7d001b62bfbec9eaeb7fac41c03c7afdcf345bdc85d2"
	I0924 01:20:00.933708  503471 logs.go:123] Gathering logs for kubernetes-dashboard [ffc127edc7c13cabaa59d04ea0a177c94e9e6bf5bfccbfda48fd77224f966e63] ...
	I0924 01:20:00.933741  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffc127edc7c13cabaa59d04ea0a177c94e9e6bf5bfccbfda48fd77224f966e63"
	I0924 01:20:00.983639  503471 logs.go:123] Gathering logs for container status ...
	I0924 01:20:00.983672  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:20:01.031385  503471 logs.go:123] Gathering logs for kube-controller-manager [840c0a636700f21e186c2b7d0a495ae8ba333ce02654b24940c178384884ac55] ...
	I0924 01:20:01.031544  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 840c0a636700f21e186c2b7d0a495ae8ba333ce02654b24940c178384884ac55"
	I0924 01:20:01.119721  503471 logs.go:123] Gathering logs for kindnet [c1d6f4b139eb39cb24db27d8618300757f678faca88fa20d24e066fa8a75c335] ...
	I0924 01:20:01.119762  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1d6f4b139eb39cb24db27d8618300757f678faca88fa20d24e066fa8a75c335"
	I0924 01:20:01.189736  503471 logs.go:123] Gathering logs for storage-provisioner [fd25616712b12bdb0e36c34f8d44469da4cb256070f158c54dc08e1b4037a13d] ...
	I0924 01:20:01.189781  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd25616712b12bdb0e36c34f8d44469da4cb256070f158c54dc08e1b4037a13d"
	I0924 01:20:01.236887  503471 out.go:358] Setting ErrFile to fd 2...
	I0924 01:20:01.236929  503471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0924 01:20:01.236990  503471 out.go:270] X Problems detected in kubelet:
	W0924 01:20:01.237011  503471 out.go:270]   Sep 24 01:19:23 old-k8s-version-654890 kubelet[666]: E0924 01:19:23.783518     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:20:01.237022  503471 out.go:270]   Sep 24 01:19:34 old-k8s-version-654890 kubelet[666]: E0924 01:19:34.788045     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:20:01.237031  503471 out.go:270]   Sep 24 01:19:34 old-k8s-version-654890 kubelet[666]: E0924 01:19:34.792957     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:20:01.237044  503471 out.go:270]   Sep 24 01:19:45 old-k8s-version-654890 kubelet[666]: E0924 01:19:45.783770     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:20:01.237050  503471 out.go:270]   Sep 24 01:19:49 old-k8s-version-654890 kubelet[666]: E0924 01:19:49.783506     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	I0924 01:20:01.237057  503471 out.go:358] Setting ErrFile to fd 2...
	I0924 01:20:01.237068  503471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:20:11.238141  503471 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0924 01:20:11.249366  503471 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0924 01:20:11.251766  503471 out.go:201] 
	W0924 01:20:11.253416  503471 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0924 01:20:11.253457  503471 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0924 01:20:11.253478  503471 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0924 01:20:11.253487  503471 out.go:270] * 
	W0924 01:20:11.254517  503471 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 01:20:11.256620  503471 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-654890 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
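Note: the repeated metrics-server ImagePullBackOff entries above are expected fixture noise; the audit log below shows this suite deliberately pointed metrics-server at the unresolvable registry fake.domain (addons enable metrics-server --registries=MetricsServer=fake.domain). The terminal failure is K8S_UNHEALTHY_CONTROL_PLANE: the apiserver answered /healthz with 200, but the control plane never reported v1.20.0. As a minimal, untested sketch of the recovery path the log itself suggests (args copied verbatim from the failing invocation above):

	# purge all local profiles and state, then retry the same start
	out/minikube-linux-arm64 delete --all --purge
	out/minikube-linux-arm64 start -p old-k8s-version-654890 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0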
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-654890
helpers_test.go:235: (dbg) docker inspect old-k8s-version-654890:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "da622e672fcf5cf588c49527db1e3748578c64f06c54c05957df1fb2a8c5aee5",
	        "Created": "2024-09-24T01:10:36.906966417Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 503665,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-24T01:13:54.628022762Z",
	            "FinishedAt": "2024-09-24T01:13:53.610212588Z"
	        },
	        "Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
	        "ResolvConfPath": "/var/lib/docker/containers/da622e672fcf5cf588c49527db1e3748578c64f06c54c05957df1fb2a8c5aee5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/da622e672fcf5cf588c49527db1e3748578c64f06c54c05957df1fb2a8c5aee5/hostname",
	        "HostsPath": "/var/lib/docker/containers/da622e672fcf5cf588c49527db1e3748578c64f06c54c05957df1fb2a8c5aee5/hosts",
	        "LogPath": "/var/lib/docker/containers/da622e672fcf5cf588c49527db1e3748578c64f06c54c05957df1fb2a8c5aee5/da622e672fcf5cf588c49527db1e3748578c64f06c54c05957df1fb2a8c5aee5-json.log",
	        "Name": "/old-k8s-version-654890",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-654890:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-654890",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/20df8a8b727b9375e61e46d086c438e7862ee0d73e487486c0c1f794393c3059-init/diff:/var/lib/docker/overlay2/7ad1ac86d8d84caef983ee398d28a66996d884096876cd745ca39b66abf10752/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20df8a8b727b9375e61e46d086c438e7862ee0d73e487486c0c1f794393c3059/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20df8a8b727b9375e61e46d086c438e7862ee0d73e487486c0c1f794393c3059/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20df8a8b727b9375e61e46d086c438e7862ee0d73e487486c0c1f794393c3059/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-654890",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-654890/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-654890",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-654890",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-654890",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e05c30016d00e3d42c9b9d7b22fa78cf703a8009d523f5f936437bbed520dcb0",
	            "SandboxKey": "/var/run/docker/netns/e05c30016d00",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-654890": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "63ec0b812ffc8f6f22c1ba958f67fb90363268d030eeb4421b582536774bee5a",
	                    "EndpointID": "e7930192aacfde574dcf8e6ee8d40306a8904afb51403138a46d67eca5fe98f2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-654890",
	                        "da622e672fcf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
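The inspect output above confirms the node container itself is healthy: State.Status is "running" (Pid 503665) and it holds 192.168.76.2 on the old-k8s-version-654890 network, so the failure is inside the cluster rather than at the Docker layer. For spot-checking just those fields, docker inspect accepts a Go template via --format; a minimal sketch, with field paths taken from the JSON above:

	# print only the container state, PID, and IP on the profile network
	docker inspect --format 'status={{.State.Status}} pid={{.State.Pid}} ip={{(index .NetworkSettings.Networks "old-k8s-version-654890").IPAddress}}' old-k8s-version-654890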
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-654890 -n old-k8s-version-654890
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-654890 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-654890 logs -n 25: (2.889601066s)
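The logs -n 25 invocation above tails only the last 25 entries per source; when escalating, the advice box printed earlier asks for a complete dump attached to the GitHub issue. A minimal sketch of that invocation:

	# write the full log dump to logs.txt for attachment to an issue
	out/minikube-linux-arm64 -p old-k8s-version-654890 logs --file=logs.txt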
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-005476                                        | pause-005476           | jenkins | v1.34.0 | 24 Sep 24 01:09 UTC | 24 Sep 24 01:09 UTC |
	|         | --alsologtostderr -v=5                                 |                        |         |         |                     |                     |
	| unpause | -p pause-005476                                        | pause-005476           | jenkins | v1.34.0 | 24 Sep 24 01:09 UTC | 24 Sep 24 01:09 UTC |
	|         | --alsologtostderr -v=5                                 |                        |         |         |                     |                     |
	| pause   | -p pause-005476                                        | pause-005476           | jenkins | v1.34.0 | 24 Sep 24 01:09 UTC | 24 Sep 24 01:09 UTC |
	|         | --alsologtostderr -v=5                                 |                        |         |         |                     |                     |
	| delete  | -p pause-005476                                        | pause-005476           | jenkins | v1.34.0 | 24 Sep 24 01:09 UTC | 24 Sep 24 01:09 UTC |
	|         | --alsologtostderr -v=5                                 |                        |         |         |                     |                     |
	| delete  | -p pause-005476                                        | pause-005476           | jenkins | v1.34.0 | 24 Sep 24 01:09 UTC | 24 Sep 24 01:09 UTC |
	| start   | -p cert-options-649069                                 | cert-options-649069    | jenkins | v1.34.0 | 24 Sep 24 01:09 UTC | 24 Sep 24 01:10 UTC |
	|         | --memory=2048                                          |                        |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                        |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                        |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                        |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                        |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                        |         |         |                     |                     |
	| ssh     | cert-options-649069 ssh                                | cert-options-649069    | jenkins | v1.34.0 | 24 Sep 24 01:10 UTC | 24 Sep 24 01:10 UTC |
	|         | openssl x509 -text -noout -in                          |                        |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                        |         |         |                     |                     |
	| ssh     | -p cert-options-649069 -- sudo                         | cert-options-649069    | jenkins | v1.34.0 | 24 Sep 24 01:10 UTC | 24 Sep 24 01:10 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                        |         |         |                     |                     |
	| delete  | -p cert-options-649069                                 | cert-options-649069    | jenkins | v1.34.0 | 24 Sep 24 01:10 UTC | 24 Sep 24 01:10 UTC |
	| start   | -p old-k8s-version-654890                              | old-k8s-version-654890 | jenkins | v1.34.0 | 24 Sep 24 01:10 UTC | 24 Sep 24 01:13 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                        |         |         |                     |                     |
	| start   | -p cert-expiration-136100                              | cert-expiration-136100 | jenkins | v1.34.0 | 24 Sep 24 01:13 UTC | 24 Sep 24 01:13 UTC |
	|         | --memory=2048                                          |                        |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                        |         |         |                     |                     |
	| delete  | -p cert-expiration-136100                              | cert-expiration-136100 | jenkins | v1.34.0 | 24 Sep 24 01:13 UTC | 24 Sep 24 01:13 UTC |
	| start   | -p no-preload-558135                                   | no-preload-558135      | jenkins | v1.34.0 | 24 Sep 24 01:13 UTC | 24 Sep 24 01:14 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-654890        | old-k8s-version-654890 | jenkins | v1.34.0 | 24 Sep 24 01:13 UTC | 24 Sep 24 01:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-654890                              | old-k8s-version-654890 | jenkins | v1.34.0 | 24 Sep 24 01:13 UTC | 24 Sep 24 01:13 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-654890             | old-k8s-version-654890 | jenkins | v1.34.0 | 24 Sep 24 01:13 UTC | 24 Sep 24 01:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-654890                              | old-k8s-version-654890 | jenkins | v1.34.0 | 24 Sep 24 01:13 UTC |                     |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                        |         |         |                     |                     |
	|         | --kvm-network=default                                  |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                        |         |         |                     |                     |
	|         | --keep-context=false                                   |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-558135             | no-preload-558135      | jenkins | v1.34.0 | 24 Sep 24 01:14 UTC | 24 Sep 24 01:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                        |         |         |                     |                     |
	| stop    | -p no-preload-558135                                   | no-preload-558135      | jenkins | v1.34.0 | 24 Sep 24 01:14 UTC | 24 Sep 24 01:15 UTC |
	|         | --alsologtostderr -v=3                                 |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-558135                  | no-preload-558135      | jenkins | v1.34.0 | 24 Sep 24 01:15 UTC | 24 Sep 24 01:15 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                        |         |         |                     |                     |
	| start   | -p no-preload-558135                                   | no-preload-558135      | jenkins | v1.34.0 | 24 Sep 24 01:15 UTC | 24 Sep 24 01:19 UTC |
	|         | --memory=2200                                          |                        |         |         |                     |                     |
	|         | --alsologtostderr                                      |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                        |         |         |                     |                     |
	|         | --driver=docker                                        |                        |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                        |         |         |                     |                     |
	| image   | no-preload-558135 image list                           | no-preload-558135      | jenkins | v1.34.0 | 24 Sep 24 01:20 UTC | 24 Sep 24 01:20 UTC |
	|         | --format=json                                          |                        |         |         |                     |                     |
	| pause   | -p no-preload-558135                                   | no-preload-558135      | jenkins | v1.34.0 | 24 Sep 24 01:20 UTC | 24 Sep 24 01:20 UTC |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| unpause | -p no-preload-558135                                   | no-preload-558135      | jenkins | v1.34.0 | 24 Sep 24 01:20 UTC | 24 Sep 24 01:20 UTC |
	|         | --alsologtostderr -v=1                                 |                        |         |         |                     |                     |
	| delete  | -p no-preload-558135                                   | no-preload-558135      | jenkins | v1.34.0 | 24 Sep 24 01:20 UTC |                     |
	|---------|--------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 01:15:05
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 01:15:05.131324  508400 out.go:345] Setting OutFile to fd 1 ...
	I0924 01:15:05.131506  508400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:15:05.131561  508400 out.go:358] Setting ErrFile to fd 2...
	I0924 01:15:05.131584  508400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:15:05.131888  508400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-296322/.minikube/bin
	I0924 01:15:05.132306  508400 out.go:352] Setting JSON to false
	I0924 01:15:05.133461  508400 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10651,"bootTime":1727129855,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0924 01:15:05.133577  508400 start.go:139] virtualization:  
	I0924 01:15:05.135985  508400 out.go:177] * [no-preload-558135] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0924 01:15:05.139120  508400 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 01:15:05.139217  508400 notify.go:220] Checking for updates...
	I0924 01:15:05.142883  508400 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 01:15:05.144600  508400 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-296322/kubeconfig
	I0924 01:15:05.146569  508400 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-296322/.minikube
	I0924 01:15:05.148223  508400 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0924 01:15:05.150213  508400 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 01:15:05.152393  508400 config.go:182] Loaded profile config "no-preload-558135": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 01:15:05.152985  508400 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 01:15:05.192354  508400 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0924 01:15:05.192491  508400 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 01:15:05.253848  508400 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-24 01:15:05.242858864 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 01:15:05.253962  508400 docker.go:318] overlay module found
	I0924 01:15:05.256550  508400 out.go:177] * Using the docker driver based on existing profile
	I0924 01:15:05.258872  508400 start.go:297] selected driver: docker
	I0924 01:15:05.258891  508400 start.go:901] validating driver "docker" against &{Name:no-preload-558135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-558135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:15:05.259076  508400 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 01:15:05.259794  508400 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 01:15:05.308138  508400 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-24 01:15:05.299019674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 01:15:05.308517  508400 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:15:05.308548  508400 cni.go:84] Creating CNI manager for ""
	I0924 01:15:05.308590  508400 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0924 01:15:05.308646  508400 start.go:340] cluster config:
	{Name:no-preload-558135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-558135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:15:05.311569  508400 out.go:177] * Starting "no-preload-558135" primary control-plane node in "no-preload-558135" cluster
	I0924 01:15:05.313629  508400 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0924 01:15:05.315880  508400 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0924 01:15:05.317647  508400 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0924 01:15:05.317742  508400 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0924 01:15:05.317840  508400 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/config.json ...
	I0924 01:15:05.318231  508400 cache.go:107] acquiring lock: {Name:mk398fc6f821486820763f92f690857ee3a862a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:15:05.318342  508400 cache.go:115] /home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0924 01:15:05.318354  508400 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 142.236µs
	I0924 01:15:05.318368  508400 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0924 01:15:05.318386  508400 cache.go:107] acquiring lock: {Name:mk564505433d8160c6cf8d9034c077c3eb692643 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:15:05.318399  508400 cache.go:107] acquiring lock: {Name:mk9958e93ea76798dc57faa97724f0d182438d07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:15:05.318428  508400 cache.go:115] /home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I0924 01:15:05.318434  508400 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 49.412µs
	I0924 01:15:05.318440  508400 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I0924 01:15:05.318461  508400 cache.go:115] /home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I0924 01:15:05.318453  508400 cache.go:107] acquiring lock: {Name:mk8cd370c169857c73fddb24a05ce8c68e33a758 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:15:05.318470  508400 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 79.393µs
	I0924 01:15:05.318478  508400 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I0924 01:15:05.318490  508400 cache.go:115] /home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I0924 01:15:05.318489  508400 cache.go:107] acquiring lock: {Name:mkec047b96edc009c73a2992a5921497c2568ff3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:15:05.318499  508400 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 46.81µs
	I0924 01:15:05.318506  508400 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I0924 01:15:05.318514  508400 cache.go:107] acquiring lock: {Name:mk59eeb3e7494fb41bb2b8f96dd347829135888a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:15:05.318534  508400 cache.go:115] /home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I0924 01:15:05.318540  508400 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 53.022µs
	I0924 01:15:05.318546  508400 cache.go:115] /home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I0924 01:15:05.318546  508400 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I0924 01:15:05.318552  508400 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 38.31µs
	I0924 01:15:05.318560  508400 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I0924 01:15:05.318561  508400 cache.go:107] acquiring lock: {Name:mkeec81a65d2cbb29f6fa6cf5445016cf9d93eeb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:15:05.318579  508400 cache.go:107] acquiring lock: {Name:mk0fe11db21e83645ea1593e52f6cd6eba0dcf3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:15:05.318592  508400 cache.go:115] /home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I0924 01:15:05.318598  508400 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 38.901µs
	I0924 01:15:05.318604  508400 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I0924 01:15:05.318649  508400 cache.go:115] /home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I0924 01:15:05.318660  508400 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 85.112µs
	I0924 01:15:05.318668  508400 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /home/jenkins/minikube-integration/19696-296322/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I0924 01:15:05.318678  508400 cache.go:87] Successfully saved all images to host disk.
	I0924 01:15:05.337850  508400 image.go:98] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon, skipping pull
	I0924 01:15:05.337874  508400 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in daemon, skipping load
	I0924 01:15:05.337895  508400 cache.go:194] Successfully downloaded all kic artifacts
	I0924 01:15:05.337921  508400 start.go:360] acquireMachinesLock for no-preload-558135: {Name:mkc2977897c44e467ec83444dc746b792fae07d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0924 01:15:05.337981  508400 start.go:364] duration metric: took 39.31µs to acquireMachinesLock for "no-preload-558135"
	I0924 01:15:05.338006  508400 start.go:96] Skipping create...Using existing machine configuration
	I0924 01:15:05.338015  508400 fix.go:54] fixHost starting: 
	I0924 01:15:05.338285  508400 cli_runner.go:164] Run: docker container inspect no-preload-558135 --format={{.State.Status}}
	I0924 01:15:05.358943  508400 fix.go:112] recreateIfNeeded on no-preload-558135: state=Stopped err=<nil>
	W0924 01:15:05.358983  508400 fix.go:138] unexpected machine state, will restart: <nil>
	I0924 01:15:05.361525  508400 out.go:177] * Restarting existing docker container for "no-preload-558135" ...
	I0924 01:15:05.180516  503471 pod_ready.go:103] pod "etcd-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:07.675923  503471 pod_ready.go:103] pod "etcd-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:05.363889  508400 cli_runner.go:164] Run: docker start no-preload-558135
	I0924 01:15:05.736516  508400 cli_runner.go:164] Run: docker container inspect no-preload-558135 --format={{.State.Status}}
	I0924 01:15:05.757501  508400 kic.go:430] container "no-preload-558135" state is running.
	I0924 01:15:05.757890  508400 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-558135
	I0924 01:15:05.778260  508400 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/config.json ...
	I0924 01:15:05.778488  508400 machine.go:93] provisionDockerMachine start ...
	I0924 01:15:05.778554  508400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-558135
	I0924 01:15:05.802347  508400 main.go:141] libmachine: Using SSH client type: native
	I0924 01:15:05.802607  508400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I0924 01:15:05.802617  508400 main.go:141] libmachine: About to run SSH command:
	hostname
	I0924 01:15:05.804594  508400 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0924 01:15:08.938826  508400 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-558135
	
	I0924 01:15:08.938852  508400 ubuntu.go:169] provisioning hostname "no-preload-558135"
	I0924 01:15:08.938996  508400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-558135
	I0924 01:15:08.958136  508400 main.go:141] libmachine: Using SSH client type: native
	I0924 01:15:08.958387  508400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I0924 01:15:08.958405  508400 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-558135 && echo "no-preload-558135" | sudo tee /etc/hostname
	I0924 01:15:09.105960  508400 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-558135
	
	I0924 01:15:09.106090  508400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-558135
	I0924 01:15:09.128028  508400 main.go:141] libmachine: Using SSH client type: native
	I0924 01:15:09.128267  508400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I0924 01:15:09.128291  508400 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-558135' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-558135/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-558135' | sudo tee -a /etc/hosts; 
				fi
			fi
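	The hostname provisioning above is idempotent: it rewrites an existing 127.0.1.1 entry or appends one if none exists. A standalone sketch of the same pattern (hostname hard-coded here for illustration; not minikube's exact code path):

	#!/bin/sh
	# Ensure /etc/hosts maps 127.0.1.1 to the node's hostname exactly once.
	HOST=no-preload-558135
	if ! grep -q "[[:space:]]${HOST}\$" /etc/hosts; then
	  if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
	    # An entry exists for another name: rewrite it in place.
	    sudo sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 ${HOST}/" /etc/hosts
	  else
	    # No entry yet: append one.
	    echo "127.0.1.1 ${HOST}" | sudo tee -a /etc/hosts
	  fi
	fi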
	I0924 01:15:09.266951  508400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0924 01:15:09.266978  508400 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19696-296322/.minikube CaCertPath:/home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19696-296322/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19696-296322/.minikube}
	I0924 01:15:09.267017  508400 ubuntu.go:177] setting up certificates
	I0924 01:15:09.267033  508400 provision.go:84] configureAuth start
	I0924 01:15:09.267099  508400 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-558135
	I0924 01:15:09.284152  508400 provision.go:143] copyHostCerts
	I0924 01:15:09.284227  508400 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-296322/.minikube/ca.pem, removing ...
	I0924 01:15:09.284238  508400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-296322/.minikube/ca.pem
	I0924 01:15:09.284316  508400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19696-296322/.minikube/ca.pem (1078 bytes)
	I0924 01:15:09.284432  508400 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-296322/.minikube/cert.pem, removing ...
	I0924 01:15:09.284445  508400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-296322/.minikube/cert.pem
	I0924 01:15:09.284474  508400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19696-296322/.minikube/cert.pem (1123 bytes)
	I0924 01:15:09.284556  508400 exec_runner.go:144] found /home/jenkins/minikube-integration/19696-296322/.minikube/key.pem, removing ...
	I0924 01:15:09.284565  508400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19696-296322/.minikube/key.pem
	I0924 01:15:09.284591  508400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19696-296322/.minikube/key.pem (1675 bytes)
	I0924 01:15:09.284734  508400 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19696-296322/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca-key.pem org=jenkins.no-preload-558135 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-558135]
	I0924 01:15:10.018175  508400 provision.go:177] copyRemoteCerts
	I0924 01:15:10.019485  508400 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0924 01:15:10.019602  508400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-558135
	I0924 01:15:10.041738  508400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/no-preload-558135/id_rsa Username:docker}
	I0924 01:15:10.140907  508400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0924 01:15:10.178786  508400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0924 01:15:10.212058  508400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0924 01:15:10.237740  508400 provision.go:87] duration metric: took 970.69088ms to configureAuth
	I0924 01:15:10.237766  508400 ubuntu.go:193] setting minikube options for container-runtime
	I0924 01:15:10.238000  508400 config.go:182] Loaded profile config "no-preload-558135": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 01:15:10.238015  508400 machine.go:96] duration metric: took 4.459517521s to provisionDockerMachine
	I0924 01:15:10.238025  508400 start.go:293] postStartSetup for "no-preload-558135" (driver="docker")
	I0924 01:15:10.238041  508400 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0924 01:15:10.238102  508400 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0924 01:15:10.238151  508400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-558135
	I0924 01:15:10.255479  508400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/no-preload-558135/id_rsa Username:docker}
	I0924 01:15:10.352314  508400 ssh_runner.go:195] Run: cat /etc/os-release
	I0924 01:15:10.355654  508400 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0924 01:15:10.355690  508400 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0924 01:15:10.355701  508400 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0924 01:15:10.355708  508400 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0924 01:15:10.355719  508400 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-296322/.minikube/addons for local assets ...
	I0924 01:15:10.355786  508400 filesync.go:126] Scanning /home/jenkins/minikube-integration/19696-296322/.minikube/files for local assets ...
	I0924 01:15:10.355887  508400 filesync.go:149] local asset: /home/jenkins/minikube-integration/19696-296322/.minikube/files/etc/ssl/certs/3017112.pem -> 3017112.pem in /etc/ssl/certs
	I0924 01:15:10.355996  508400 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0924 01:15:10.365118  508400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/files/etc/ssl/certs/3017112.pem --> /etc/ssl/certs/3017112.pem (1708 bytes)
	I0924 01:15:10.389873  508400 start.go:296] duration metric: took 151.826309ms for postStartSetup
	I0924 01:15:10.389975  508400 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 01:15:10.390018  508400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-558135
	I0924 01:15:10.409667  508400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/no-preload-558135/id_rsa Username:docker}
	I0924 01:15:10.500566  508400 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0924 01:15:10.505315  508400 fix.go:56] duration metric: took 5.167290636s for fixHost
	I0924 01:15:10.505341  508400 start.go:83] releasing machines lock for "no-preload-558135", held for 5.167346874s
	I0924 01:15:10.505424  508400 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-558135
	I0924 01:15:10.521705  508400 ssh_runner.go:195] Run: cat /version.json
	I0924 01:15:10.521764  508400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-558135
	I0924 01:15:10.521997  508400 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0924 01:15:10.522062  508400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-558135
	I0924 01:15:10.540019  508400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/no-preload-558135/id_rsa Username:docker}
	I0924 01:15:10.552549  508400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/no-preload-558135/id_rsa Username:docker}
	I0924 01:15:10.790515  508400 ssh_runner.go:195] Run: systemctl --version
	I0924 01:15:10.796651  508400 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0924 01:15:10.803677  508400 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0924 01:15:10.844599  508400 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0924 01:15:10.844847  508400 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0924 01:15:10.861488  508400 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
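	The two find invocations above are dense one-liners. An equivalent, more readable sketch of what they do, assuming GNU sed: patch every loopback CNI config to carry a "name" field and cniVersion 1.0.0, then rename any bridge/podman configs out of the way (none were present here):

	#!/bin/sh
	for f in /etc/cni/net.d/*loopback.conf*; do
	  [ -e "$f" ] || continue
	  case "$f" in *.mk_disabled) continue ;; esac
	  # Add a "name" field if missing, then pin the CNI version.
	  grep -q name "$f" || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$f"
	  sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$f"
	done
	for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	  [ -e "$f" ] || continue
	  case "$f" in *.mk_disabled) continue ;; esac
	  sudo mv "$f" "$f.mk_disabled"   # park conflicting CNI configs
	done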
	I0924 01:15:10.861600  508400 start.go:495] detecting cgroup driver to use...
	I0924 01:15:10.861696  508400 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0924 01:15:10.861795  508400 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0924 01:15:10.887174  508400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0924 01:15:10.907361  508400 docker.go:217] disabling cri-docker service (if available) ...
	I0924 01:15:10.907561  508400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0924 01:15:10.929651  508400 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0924 01:15:10.947799  508400 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0924 01:15:11.059808  508400 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0924 01:15:11.162718  508400 docker.go:233] disabling docker service ...
	I0924 01:15:11.162800  508400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0924 01:15:11.181938  508400 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0924 01:15:11.195555  508400 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0924 01:15:11.287672  508400 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0924 01:15:11.393952  508400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0924 01:15:11.408889  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0924 01:15:11.427716  508400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0924 01:15:11.442786  508400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0924 01:15:11.455172  508400 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0924 01:15:11.455251  508400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0924 01:15:11.467063  508400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0924 01:15:11.477798  508400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0924 01:15:11.489050  508400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0924 01:15:11.500803  508400 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0924 01:15:11.510659  508400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0924 01:15:11.521564  508400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0924 01:15:11.532878  508400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0924 01:15:11.543600  508400 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0924 01:15:11.554407  508400 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0924 01:15:11.563320  508400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:15:11.654842  508400 ssh_runner.go:195] Run: sudo systemctl restart containerd
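	Taken together, the sed edits above reconfigure containerd in place before the restart; a consolidated sketch of the same sequence:

	#!/bin/sh
	CFG=/etc/containerd/config.toml
	# Pin the pause image and relax OOM score adjustment restrictions.
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$CFG"
	sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' "$CFG"
	# Select the cgroupfs driver (SystemdCgroup off) and the runc v2 shim.
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
	sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"
	sudo sed -i '/systemd_cgroup/d' "$CFG"
	sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' "$CFG"
	# Point CRI at the standard CNI config directory, then restart.
	sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$CFG"
	sudo systemctl daemon-reload && sudo systemctl restart containerd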
	I0924 01:15:11.823903  508400 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0924 01:15:11.824023  508400 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0924 01:15:11.827847  508400 start.go:563] Will wait 60s for crictl version
	I0924 01:15:11.827952  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:15:11.832381  508400 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0924 01:15:11.874255  508400 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
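	crictl resolved containerd through the runtime-endpoint written to /etc/crictl.yaml a moment earlier; the same query works with the endpoint given explicitly (equivalent sketch):

	# Bypass /etc/crictl.yaml by naming the endpoint per invocation.
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version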
	I0924 01:15:11.874357  508400 ssh_runner.go:195] Run: containerd --version
	I0924 01:15:11.897928  508400 ssh_runner.go:195] Run: containerd --version
	I0924 01:15:11.928649  508400 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0924 01:15:11.930268  508400 cli_runner.go:164] Run: docker network inspect no-preload-558135 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0924 01:15:11.946880  508400 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0924 01:15:11.950892  508400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:15:11.962490  508400 kubeadm.go:883] updating cluster {Name:no-preload-558135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-558135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0924 01:15:11.962611  508400 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0924 01:15:11.962665  508400 ssh_runner.go:195] Run: sudo crictl images --output json
	I0924 01:15:12.017958  508400 containerd.go:627] all images are preloaded for containerd runtime.
	I0924 01:15:12.017983  508400 cache_images.go:84] Images are preloaded, skipping loading
	I0924 01:15:12.017992  508400 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 containerd true true} ...
	I0924 01:15:12.018118  508400 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-558135 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:no-preload-558135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0924 01:15:12.018196  508400 ssh_runner.go:195] Run: sudo crictl info
	I0924 01:15:12.062391  508400 cni.go:84] Creating CNI manager for ""
	I0924 01:15:12.062420  508400 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0924 01:15:12.062432  508400 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0924 01:15:12.062457  508400 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-558135 NodeName:no-preload-558135 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0924 01:15:12.062592  508400 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-558135"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
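	The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp below). It can be sanity-checked offline; a minimal sketch, assuming a kubeadm release that supports config validation (v1.26+):

	# Validate the generated file against kubeadm's config schema.
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new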
	
	I0924 01:15:12.062669  508400 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0924 01:15:12.073705  508400 binaries.go:44] Found k8s binaries, skipping transfer
	I0924 01:15:12.073812  508400 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0924 01:15:12.083842  508400 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0924 01:15:12.105532  508400 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0924 01:15:12.126359  508400 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2171 bytes)
	I0924 01:15:12.146476  508400 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0924 01:15:12.150008  508400 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0924 01:15:12.161522  508400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:15:12.254881  508400 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:15:12.270877  508400 certs.go:68] Setting up /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135 for IP: 192.168.85.2
	I0924 01:15:12.271004  508400 certs.go:194] generating shared ca certs ...
	I0924 01:15:12.271035  508400 certs.go:226] acquiring lock for ca certs: {Name:mk4a6ab65221805436b06c42ec4fde316fe470ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:15:12.271211  508400 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19696-296322/.minikube/ca.key
	I0924 01:15:12.271285  508400 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19696-296322/.minikube/proxy-client-ca.key
	I0924 01:15:12.271317  508400 certs.go:256] generating profile certs ...
	I0924 01:15:12.271450  508400 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/client.key
	I0924 01:15:12.271558  508400 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/apiserver.key.75068203
	I0924 01:15:12.271647  508400 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/proxy-client.key
	I0924 01:15:12.271809  508400 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/301711.pem (1338 bytes)
	W0924 01:15:12.271862  508400 certs.go:480] ignoring /home/jenkins/minikube-integration/19696-296322/.minikube/certs/301711_empty.pem, impossibly tiny 0 bytes
	I0924 01:15:12.271894  508400 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca-key.pem (1675 bytes)
	I0924 01:15:12.271935  508400 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/ca.pem (1078 bytes)
	I0924 01:15:12.271993  508400 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/cert.pem (1123 bytes)
	I0924 01:15:12.272054  508400 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-296322/.minikube/certs/key.pem (1675 bytes)
	I0924 01:15:12.272128  508400 certs.go:484] found cert: /home/jenkins/minikube-integration/19696-296322/.minikube/files/etc/ssl/certs/3017112.pem (1708 bytes)
	I0924 01:15:12.272933  508400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0924 01:15:12.298049  508400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0924 01:15:12.323028  508400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0924 01:15:12.348775  508400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0924 01:15:12.374146  508400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0924 01:15:12.404654  508400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0924 01:15:12.436704  508400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0924 01:15:12.491604  508400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0924 01:15:12.548152  508400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0924 01:15:12.577121  508400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/certs/301711.pem --> /usr/share/ca-certificates/301711.pem (1338 bytes)
	I0924 01:15:12.602655  508400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19696-296322/.minikube/files/etc/ssl/certs/3017112.pem --> /usr/share/ca-certificates/3017112.pem (1708 bytes)
	I0924 01:15:12.649693  508400 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0924 01:15:12.668885  508400 ssh_runner.go:195] Run: openssl version
	I0924 01:15:12.677035  508400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0924 01:15:12.688416  508400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:15:12.692149  508400 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 24 00:24 /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:15:12.692236  508400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0924 01:15:12.699183  508400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0924 01:15:12.708539  508400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/301711.pem && ln -fs /usr/share/ca-certificates/301711.pem /etc/ssl/certs/301711.pem"
	I0924 01:15:12.720284  508400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/301711.pem
	I0924 01:15:12.724531  508400 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 24 00:35 /usr/share/ca-certificates/301711.pem
	I0924 01:15:12.724630  508400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/301711.pem
	I0924 01:15:12.732015  508400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/301711.pem /etc/ssl/certs/51391683.0"
	I0924 01:15:12.741377  508400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3017112.pem && ln -fs /usr/share/ca-certificates/3017112.pem /etc/ssl/certs/3017112.pem"
	I0924 01:15:12.751653  508400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3017112.pem
	I0924 01:15:12.755496  508400 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 24 00:35 /usr/share/ca-certificates/3017112.pem
	I0924 01:15:12.755622  508400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3017112.pem
	I0924 01:15:12.762797  508400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3017112.pem /etc/ssl/certs/3ec20f2e.0"
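	The test/ln pairs above install each CA into OpenSSL's hash-named layout: the symlink name (e.g. b5213941.0) is the subject hash that `openssl x509 -hash` prints, plus a .0 suffix. A generic sketch of the pattern:

	# Install a CA certificate under its OpenSSL subject-hash name.
	CA=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CA")
	sudo ln -fs "$CA" "/etc/ssl/certs/${HASH}.0"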
	I0924 01:15:12.772068  508400 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0924 01:15:12.776077  508400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0924 01:15:12.787317  508400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0924 01:15:12.795166  508400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0924 01:15:12.804311  508400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0924 01:15:12.814561  508400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0924 01:15:12.822597  508400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
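	Each -checkend 86400 call above exits non-zero if the certificate expires within 24 hours (86400 seconds), which is what would trigger regeneration. The same idiom as a loop over the cert paths checked above (sketch):

	for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	  # Non-zero exit: cert is within 24h of expiry (or already expired).
	  sudo openssl x509 -noout -in "$c" -checkend 86400 || echo "expiring soon: $c"
	done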
	I0924 01:15:12.831931  508400 kubeadm.go:392] StartCluster: {Name:no-preload-558135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-558135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 01:15:12.832042  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0924 01:15:12.832109  508400 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0924 01:15:12.875453  508400 cri.go:89] found id: "898641ce3e6f61272d8acf7eb77c1646f8b44e4f93aa22994aa8df7184e73fd6"
	I0924 01:15:12.875521  508400 cri.go:89] found id: "a051a2b449ff0ba4bdc77398857a62cef81b55d77c316d3c89f44d0d2b0880a7"
	I0924 01:15:12.875542  508400 cri.go:89] found id: "e70ec5008d041010bfaf45bf28c1a2e1aafc37e8bd46d9aec69986eb01bd4376"
	I0924 01:15:12.875562  508400 cri.go:89] found id: "9698bd1ae5c4cb7085eb93df40c4a78c0e947b6b6096887cb32d3d3b3f84f3d9"
	I0924 01:15:12.875594  508400 cri.go:89] found id: "49fe7dc079e9bff501f95307c0034839334c0e54affc0c752936b8e78ad4cbec"
	I0924 01:15:12.875600  508400 cri.go:89] found id: "6a91941381cb2fbd806c1cdd6b71ff46da40752cc4a426d4d6d1e3244a59a5d4"
	I0924 01:15:12.875604  508400 cri.go:89] found id: "1cb94a371664ea3677fa31112c560d578dac0f9675204f133025c555395cdf7b"
	I0924 01:15:12.875608  508400 cri.go:89] found id: "51be5c6b298bd9996efc77eff4f6ddceeee75d629fefc08a26771c02220a8a8f"
	I0924 01:15:12.875611  508400 cri.go:89] found id: ""
	I0924 01:15:12.875679  508400 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0924 01:15:12.888402  508400 cri.go:116] JSON = null
	W0924 01:15:12.888452  508400 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
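	The warning above comes from a consistency check: `runc list -f json` reported no paused containers (JSON = null) while `crictl ps -a` had just listed 8, so there is nothing to unpause and the mismatch is only logged. A rough, self-contained sketch of that comparison (container IDs truncated for brevity, struct shape assumed):

    package main

    import (
        "encoding/json"
        "log"
    )

    type runcContainer struct {
        ID     string `json:"id"`
        Status string `json:"status"`
    }

    func main() {
        runcJSON := []byte("null")                         // what `runc list -f json` returned above
        criIDs := []string{"898641ce3e6f", "a051a2b449ff"} // truncated example IDs from `crictl ps -a`

        var containers []runcContainer // stays nil when the JSON is `null`
        if err := json.Unmarshal(runcJSON, &containers); err != nil {
            log.Fatal(err)
        }
        var paused []runcContainer
        for _, c := range containers {
            if c.Status == "paused" {
                paused = append(paused, c)
            }
        }
        if len(paused) == 0 && len(criIDs) > 0 {
            log.Printf("unpause failed: list paused: list returned %d containers, but ps returned %d",
                len(paused), len(criIDs))
        }
    }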
	I0924 01:15:12.888518  508400 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0924 01:15:12.897670  508400 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0924 01:15:12.897731  508400 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0924 01:15:12.897808  508400 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0924 01:15:12.907186  508400 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0924 01:15:12.907817  508400 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-558135" does not appear in /home/jenkins/minikube-integration/19696-296322/kubeconfig
	I0924 01:15:12.908072  508400 kubeconfig.go:62] /home/jenkins/minikube-integration/19696-296322/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-558135" cluster setting kubeconfig missing "no-preload-558135" context setting]
	I0924 01:15:12.908534  508400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-296322/kubeconfig: {Name:mk12cf5f8c4244466c827b22ce4fe2341553290d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:15:12.910635  508400 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0924 01:15:12.920473  508400 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.85.2
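	At this point minikube decides whether the control plane must be reconfigured by diffing the kubeadm config already on the node against the freshly rendered one; an empty diff (exit status 0) lets it skip reconfiguration, which is what happens here. A small sketch of that decision based on diff's documented exit codes (the helper name is hypothetical):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // needsReconfig reports whether the on-disk kubeadm config differs from the
    // newly rendered one, mirroring the `sudo diff -u old new` call in the log.
    func needsReconfig(current, proposed string) (bool, error) {
        err := exec.Command("diff", "-u", current, proposed).Run()
        if err == nil {
            return false, nil // exit 0: files identical, no reconfiguration needed
        }
        if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
            return true, nil // exit 1: files differ
        }
        return false, err // exit 2 or exec failure: e.g. a file is missing
    }

    func main() {
        diff, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
        fmt.Println(diff, err)
    }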
	I0924 01:15:12.920507  508400 kubeadm.go:597] duration metric: took 22.755876ms to restartPrimaryControlPlane
	I0924 01:15:12.920517  508400 kubeadm.go:394] duration metric: took 88.595128ms to StartCluster
	I0924 01:15:12.920532  508400 settings.go:142] acquiring lock: {Name:mk1b01c5281da0b61714a1aa76e5632af5b39da5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:15:12.920616  508400 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19696-296322/kubeconfig
	I0924 01:15:12.921565  508400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-296322/kubeconfig: {Name:mk12cf5f8c4244466c827b22ce4fe2341553290d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 01:15:12.921789  508400 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0924 01:15:12.922084  508400 config.go:182] Loaded profile config "no-preload-558135": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 01:15:12.922131  508400 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0924 01:15:12.922212  508400 addons.go:69] Setting storage-provisioner=true in profile "no-preload-558135"
	I0924 01:15:12.922231  508400 addons.go:234] Setting addon storage-provisioner=true in "no-preload-558135"
	W0924 01:15:12.922237  508400 addons.go:243] addon storage-provisioner should already be in state true
	I0924 01:15:12.922267  508400 host.go:66] Checking if "no-preload-558135" exists ...
	I0924 01:15:12.922313  508400 addons.go:69] Setting default-storageclass=true in profile "no-preload-558135"
	I0924 01:15:12.922334  508400 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-558135"
	I0924 01:15:12.922637  508400 cli_runner.go:164] Run: docker container inspect no-preload-558135 --format={{.State.Status}}
	I0924 01:15:12.922943  508400 cli_runner.go:164] Run: docker container inspect no-preload-558135 --format={{.State.Status}}
	I0924 01:15:12.923205  508400 addons.go:69] Setting dashboard=true in profile "no-preload-558135"
	I0924 01:15:12.923226  508400 addons.go:234] Setting addon dashboard=true in "no-preload-558135"
	W0924 01:15:12.923233  508400 addons.go:243] addon dashboard should already be in state true
	I0924 01:15:12.923259  508400 host.go:66] Checking if "no-preload-558135" exists ...
	I0924 01:15:12.923684  508400 cli_runner.go:164] Run: docker container inspect no-preload-558135 --format={{.State.Status}}
	I0924 01:15:12.926046  508400 out.go:177] * Verifying Kubernetes components...
	I0924 01:15:12.926415  508400 addons.go:69] Setting metrics-server=true in profile "no-preload-558135"
	I0924 01:15:12.926473  508400 addons.go:234] Setting addon metrics-server=true in "no-preload-558135"
	W0924 01:15:12.926496  508400 addons.go:243] addon metrics-server should already be in state true
	I0924 01:15:12.926558  508400 host.go:66] Checking if "no-preload-558135" exists ...
	I0924 01:15:12.927313  508400 cli_runner.go:164] Run: docker container inspect no-preload-558135 --format={{.State.Status}}
	I0924 01:15:12.928647  508400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0924 01:15:12.964848  508400 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0924 01:15:12.966551  508400 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:15:12.966579  508400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
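	"scp memory --> ..." means the manifest was rendered in memory and streamed over the SSH connection (user docker, port 33440, the id_rsa key seen in the sshutil lines below) rather than copied from a file on disk. A rough standalone equivalent using golang.org/x/crypto/ssh; this is our sketch, not minikube's sshutil implementation:

    package main

    import (
        "bytes"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // pushBytes streams data to a remote path over an SSH session, roughly what
    // the "scp memory --> /etc/kubernetes/addons/..." log lines describe.
    func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run("sudo tee " + remotePath + " >/dev/null")
    }

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19696-296322/.minikube/machines/no-preload-558135/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a local test VM, not for production
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33440", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        manifest := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: example\n")
        if err := pushBytes(client, manifest, "/tmp/example.yaml"); err != nil {
            log.Fatal(err)
        }
    }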
	I0924 01:15:12.966644  508400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-558135
	I0924 01:15:12.995962  508400 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0924 01:15:12.998979  508400 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0924 01:15:12.999076  508400 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0924 01:15:12.999168  508400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-558135
	I0924 01:15:13.011136  508400 addons.go:234] Setting addon default-storageclass=true in "no-preload-558135"
	W0924 01:15:13.011165  508400 addons.go:243] addon default-storageclass should already be in state true
	I0924 01:15:13.011193  508400 host.go:66] Checking if "no-preload-558135" exists ...
	I0924 01:15:13.011627  508400 cli_runner.go:164] Run: docker container inspect no-preload-558135 --format={{.State.Status}}
	I0924 01:15:13.040532  508400 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0924 01:15:13.042583  508400 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0924 01:15:09.676736  503471 pod_ready.go:103] pod "etcd-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:12.177175  503471 pod_ready.go:103] pod "etcd-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:14.177211  503471 pod_ready.go:103] pod "etcd-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:13.044299  508400 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0924 01:15:13.044326  508400 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0924 01:15:13.044402  508400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-558135
	I0924 01:15:13.054370  508400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/no-preload-558135/id_rsa Username:docker}
	I0924 01:15:13.063951  508400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/no-preload-558135/id_rsa Username:docker}
	I0924 01:15:13.082463  508400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/no-preload-558135/id_rsa Username:docker}
	I0924 01:15:13.084948  508400 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0924 01:15:13.084968  508400 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0924 01:15:13.085036  508400 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-558135
	I0924 01:15:13.127082  508400 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/no-preload-558135/id_rsa Username:docker}
	I0924 01:15:13.143073  508400 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0924 01:15:13.223155  508400 node_ready.go:35] waiting up to 6m0s for node "no-preload-558135" to be "Ready" ...
	I0924 01:15:13.247833  508400 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0924 01:15:13.247908  508400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0924 01:15:13.316283  508400 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0924 01:15:13.316369  508400 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0924 01:15:13.350751  508400 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:15:13.350774  508400 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0924 01:15:13.361840  508400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0924 01:15:13.432285  508400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:15:13.460305  508400 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0924 01:15:13.460381  508400 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0924 01:15:13.521063  508400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0924 01:15:13.538871  508400 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0924 01:15:13.539034  508400 retry.go:31] will retry after 322.555897ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0924 01:15:13.589463  508400 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0924 01:15:13.589543  508400 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0924 01:15:13.664940  508400 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0924 01:15:13.664978  508400 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W0924 01:15:13.768937  508400 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0924 01:15:13.768964  508400 retry.go:31] will retry after 315.177653ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0924 01:15:13.834994  508400 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0924 01:15:13.835015  508400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0924 01:15:13.862460  508400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0924 01:15:13.904352  508400 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0924 01:15:13.904388  508400 retry.go:31] will retry after 330.235998ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
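	Each apply above fails with `connection refused` because the apiserver is still coming up on localhost:8443, and retry.go schedules another attempt after a few hundred milliseconds (322ms, 315ms, 330ms here). A generic sketch of that retry-with-jittered-backoff pattern; the helper and delay values are illustrative, not minikube's exact implementation:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithJitter re-runs fn until it succeeds or attempts are exhausted,
    // sleeping a randomized ~300ms between tries, similar to the retry.go
    // behavior in the log above.
    func retryWithJitter(attempts int, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := 250*time.Millisecond + time.Duration(rand.Intn(150))*time.Millisecond
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        calls := 0
        err := retryWithJitter(5, func() error {
            calls++
            if calls < 3 {
                return errors.New("connect: connection refused")
            }
            return nil
        })
        fmt.Println(calls, err)
    }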
	I0924 01:15:13.926984  508400 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0924 01:15:13.927012  508400 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0924 01:15:14.073525  508400 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0924 01:15:14.073551  508400 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0924 01:15:14.084785  508400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0924 01:15:14.137007  508400 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0924 01:15:14.137032  508400 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0924 01:15:14.234794  508400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0924 01:15:14.250191  508400 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0924 01:15:14.250218  508400 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0924 01:15:14.396589  508400 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0924 01:15:14.396616  508400 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0924 01:15:14.531117  508400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0924 01:15:16.182786  503471 pod_ready.go:103] pod "etcd-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:18.677465  503471 pod_ready.go:103] pod "etcd-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:17.871041  508400 node_ready.go:49] node "no-preload-558135" has status "Ready":"True"
	I0924 01:15:17.871071  508400 node_ready.go:38] duration metric: took 4.647836834s for node "no-preload-558135" to be "Ready" ...
	I0924 01:15:17.871081  508400 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:15:17.892084  508400 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-rq7m2" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:17.907548  508400 pod_ready.go:93] pod "coredns-7c65d6cfc9-rq7m2" in "kube-system" namespace has status "Ready":"True"
	I0924 01:15:17.907579  508400 pod_ready.go:82] duration metric: took 15.455843ms for pod "coredns-7c65d6cfc9-rq7m2" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:17.907592  508400 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-558135" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:17.920863  508400 pod_ready.go:93] pod "etcd-no-preload-558135" in "kube-system" namespace has status "Ready":"True"
	I0924 01:15:17.920889  508400 pod_ready.go:82] duration metric: took 13.28849ms for pod "etcd-no-preload-558135" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:17.920907  508400 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-558135" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:17.931582  508400 pod_ready.go:93] pod "kube-apiserver-no-preload-558135" in "kube-system" namespace has status "Ready":"True"
	I0924 01:15:17.931607  508400 pod_ready.go:82] duration metric: took 10.692551ms for pod "kube-apiserver-no-preload-558135" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:17.931620  508400 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-558135" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:17.942482  508400 pod_ready.go:93] pod "kube-controller-manager-no-preload-558135" in "kube-system" namespace has status "Ready":"True"
	I0924 01:15:17.942508  508400 pod_ready.go:82] duration metric: took 10.880957ms for pod "kube-controller-manager-no-preload-558135" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:17.942522  508400 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-krnb9" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:18.076063  508400 pod_ready.go:93] pod "kube-proxy-krnb9" in "kube-system" namespace has status "Ready":"True"
	I0924 01:15:18.076092  508400 pod_ready.go:82] duration metric: took 133.561749ms for pod "kube-proxy-krnb9" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:18.076106  508400 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-558135" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:18.132262  508400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.269764527s)
	I0924 01:15:18.479635  508400 pod_ready.go:93] pod "kube-scheduler-no-preload-558135" in "kube-system" namespace has status "Ready":"True"
	I0924 01:15:18.479664  508400 pod_ready.go:82] duration metric: took 403.550305ms for pod "kube-scheduler-no-preload-558135" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:18.479676  508400 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace to be "Ready" ...
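	The pod_ready lines that follow are a poll loop re-reading each pod's Ready condition until it turns "True" or the timeout expires (which is exactly what happens to metrics-server below). A standalone sketch of the same check via kubectl's jsonpath output; the pod name and namespace are taken from this log, while the loop and interval are ours:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podReady shells out to kubectl and reads the pod's Ready condition,
    // the same signal the pod_ready.go lines above keep polling for.
    func podReady(ns, name string) (bool, error) {
        out, err := exec.Command("kubectl", "get", "pod", "-n", ns, name,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            ok, err := podReady("kube-system", "metrics-server-6867b74b74-46xh4")
            if err == nil && ok {
                fmt.Println("Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }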
	I0924 01:15:20.492005  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:21.281804  508400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.196976401s)
	I0924 01:15:21.350090  508400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.115255399s)
	I0924 01:15:21.350125  508400 addons.go:475] Verifying addon metrics-server=true in "no-preload-558135"
	I0924 01:15:21.748451  508400 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.217288701s)
	I0924 01:15:21.750205  508400 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-558135 addons enable metrics-server
	
	I0924 01:15:21.751913  508400 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0924 01:15:20.677676  503471 pod_ready.go:103] pod "etcd-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:23.177277  503471 pod_ready.go:103] pod "etcd-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:21.754200  508400 addons.go:510] duration metric: took 8.83205871s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0924 01:15:22.987357  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:25.178088  503471 pod_ready.go:103] pod "etcd-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:27.678473  503471 pod_ready.go:103] pod "etcd-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:25.492114  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:27.992267  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:30.178376  503471 pod_ready.go:103] pod "etcd-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:32.713650  503471 pod_ready.go:103] pod "etcd-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:30.487533  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:32.987567  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:35.177286  503471 pod_ready.go:103] pod "etcd-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:37.177700  503471 pod_ready.go:103] pod "etcd-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:35.486508  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:37.491829  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:39.987739  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:39.675978  503471 pod_ready.go:103] pod "etcd-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:41.681287  503471 pod_ready.go:103] pod "etcd-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:43.177206  503471 pod_ready.go:93] pod "etcd-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"True"
	I0924 01:15:43.177231  503471 pod_ready.go:82] duration metric: took 1m20.506853276s for pod "etcd-old-k8s-version-654890" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:43.177248  503471 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-654890" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:43.182450  503471 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"True"
	I0924 01:15:43.182476  503471 pod_ready.go:82] duration metric: took 5.21957ms for pod "kube-apiserver-old-k8s-version-654890" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:43.182507  503471 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-654890" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:42.485670  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:44.987149  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:45.191626  503471 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:47.190352  503471 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"True"
	I0924 01:15:47.190379  503471 pod_ready.go:82] duration metric: took 4.00786197s for pod "kube-controller-manager-old-k8s-version-654890" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:47.190392  503471 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dctnp" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:47.195753  503471 pod_ready.go:93] pod "kube-proxy-dctnp" in "kube-system" namespace has status "Ready":"True"
	I0924 01:15:47.195776  503471 pod_ready.go:82] duration metric: took 5.355496ms for pod "kube-proxy-dctnp" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:47.195787  503471 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-654890" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:47.201308  503471 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-654890" in "kube-system" namespace has status "Ready":"True"
	I0924 01:15:47.201337  503471 pod_ready.go:82] duration metric: took 5.540809ms for pod "kube-scheduler-old-k8s-version-654890" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:47.201351  503471 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace to be "Ready" ...
	I0924 01:15:47.487928  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:49.986122  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:49.208658  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:51.707222  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:53.708077  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:52.485989  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:54.486343  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:56.209641  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:58.707466  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:56.986648  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:15:59.486329  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:00.708419  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:03.207876  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:01.986345  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:04.487715  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:05.208533  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:07.707911  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:06.987377  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:09.486389  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:09.707959  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:12.207936  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:11.986562  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:13.988735  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:14.708194  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:17.208113  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:16.485844  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:18.486353  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:19.208473  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:21.710594  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:20.486455  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:22.487120  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:24.487852  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:24.208195  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:26.209405  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:28.713846  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:26.986761  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:29.486659  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:31.208503  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:33.707734  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:31.988539  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:34.486675  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:35.708337  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:38.207839  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:36.986563  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:39.486197  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:40.213566  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:42.709635  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:41.985760  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:43.985931  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:45.211860  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:47.707193  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:45.986189  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:48.486878  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:49.707944  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:51.710202  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:53.712325  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:50.986005  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:52.986296  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:56.212429  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:58.708373  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:55.485852  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:57.488641  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:16:59.985527  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:01.208889  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:03.227933  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:01.989793  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:03.990725  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:05.709089  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:08.208394  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:06.488664  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:08.986286  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:10.208508  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:12.707726  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:11.485880  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:13.490174  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:15.208067  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:17.208790  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:15.986288  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:17.986364  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:19.209149  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:21.708671  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:20.485872  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:22.486441  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:24.486873  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:24.216251  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:26.707770  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:26.985850  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:29.005750  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:29.208529  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:31.709364  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:33.761534  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:31.486011  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:33.486893  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:36.208565  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:38.707551  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:35.985760  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:37.986094  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:39.986371  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:40.707768  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:42.707814  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:41.987251  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:43.987616  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:44.707848  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:46.708038  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:48.708177  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:46.485566  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:48.486342  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:51.207438  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:53.208301  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:50.985619  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:52.986354  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:55.707839  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:58.208516  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:55.486824  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:57.489396  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:17:59.985641  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:00.235354  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:02.707695  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:01.987756  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:04.486204  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:05.208024  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:07.707243  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:06.486823  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:08.985919  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:09.707408  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:11.707886  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:10.986437  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:13.485686  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:14.207930  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:16.707304  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:15.486824  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:17.487910  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:19.985698  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:19.209677  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:21.708310  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:21.985762  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:23.986376  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:24.207628  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:26.208190  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:28.707753  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:26.485064  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:28.485541  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:31.207884  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:33.208047  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:30.486377  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:32.986106  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:34.987626  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:35.208663  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:37.707832  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:37.488431  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:39.986291  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:40.207918  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:42.209391  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:41.986776  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:44.485574  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:44.710598  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:47.207389  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:46.486185  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:48.486314  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:49.207968  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:51.208089  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:53.215727  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:50.985833  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:52.986186  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:55.707466  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:57.707846  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:55.486493  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:57.487971  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:18:59.985819  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:00.226503  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:02.708170  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:02.485654  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:04.487225  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:05.207792  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:07.208012  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:06.985291  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:08.985854  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:09.208719  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:11.707807  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:13.707883  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:10.986372  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:12.986504  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:16.208372  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:18.208583  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:15.485785  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:17.490242  508400 pod_ready.go:103] pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:18.485829  508400 pod_ready.go:82] duration metric: took 4m0.006138258s for pod "metrics-server-6867b74b74-46xh4" in "kube-system" namespace to be "Ready" ...
	E0924 01:19:18.485857  508400 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0924 01:19:18.485867  508400 pod_ready.go:39] duration metric: took 4m0.614774773s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
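The wait that times out above is pod_ready.go's poll of the pod's Ready condition against a 4m deadline. A minimal client-go sketch of such a loop, using the pod name, namespace, and kubeconfig path that appear in the log (the 2s poll interval and the wiring are assumptions for illustration, not minikube's actual code):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// 4m deadline, matching the "took 4m0.0061s" timeout logged above.
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-6867b74b74-46xh4", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			// Corresponds to the "WaitExtra: waitPodCondition: context
			// deadline exceeded" error in the log.
			fmt.Println("waitPodCondition:", ctx.Err())
			return
		case <-time.After(2 * time.Second): // assumed poll interval
		}
	}
}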
	I0924 01:19:18.485890  508400 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:19:18.485932  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:19:18.486000  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:19:18.529126  508400 cri.go:89] found id: "cbade5a359066cba74a1f6bc775533cc3c5cb2c6d38523b102789f5bece0f7cd"
	I0924 01:19:18.529150  508400 cri.go:89] found id: "49fe7dc079e9bff501f95307c0034839334c0e54affc0c752936b8e78ad4cbec"
	I0924 01:19:18.529155  508400 cri.go:89] found id: ""
	I0924 01:19:18.529163  508400 logs.go:276] 2 containers: [cbade5a359066cba74a1f6bc775533cc3c5cb2c6d38523b102789f5bece0f7cd 49fe7dc079e9bff501f95307c0034839334c0e54affc0c752936b8e78ad4cbec]
	I0924 01:19:18.529221  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:18.533367  508400 ssh_runner.go:195] Run: which crictl
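The listing step above resolves container IDs by running crictl over SSH and parsing the quiet output. A small Go sketch of the same resolution, run locally rather than over ssh_runner (illustrative only, not minikube's cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the logged command:
//   sudo crictl ps -a --quiet --name=<name>
// and returns one container ID per output line.
func listContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listContainerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	// Matches the "N containers: [...]" lines emitted by logs.go:276.
	fmt.Printf("%d containers: %v\n", len(ids), ids)
}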
	I0924 01:19:18.537119  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0924 01:19:18.537199  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:19:18.576445  508400 cri.go:89] found id: "c1e10e0c55d8c40ab07efefe182096b7d34a0c512646431ef6ffb097fd31a1a6"
	I0924 01:19:18.576465  508400 cri.go:89] found id: "1cb94a371664ea3677fa31112c560d578dac0f9675204f133025c555395cdf7b"
	I0924 01:19:18.576470  508400 cri.go:89] found id: ""
	I0924 01:19:18.576477  508400 logs.go:276] 2 containers: [c1e10e0c55d8c40ab07efefe182096b7d34a0c512646431ef6ffb097fd31a1a6 1cb94a371664ea3677fa31112c560d578dac0f9675204f133025c555395cdf7b]
	I0924 01:19:18.576536  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:18.580457  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:18.584094  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0924 01:19:18.584170  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:19:18.624141  508400 cri.go:89] found id: "34ec1669638ee25702af042937a534774b4fb1cb2f3ca4df2ce689ca6b58dbd1"
	I0924 01:19:18.624167  508400 cri.go:89] found id: "898641ce3e6f61272d8acf7eb77c1646f8b44e4f93aa22994aa8df7184e73fd6"
	I0924 01:19:18.624171  508400 cri.go:89] found id: ""
	I0924 01:19:18.624178  508400 logs.go:276] 2 containers: [34ec1669638ee25702af042937a534774b4fb1cb2f3ca4df2ce689ca6b58dbd1 898641ce3e6f61272d8acf7eb77c1646f8b44e4f93aa22994aa8df7184e73fd6]
	I0924 01:19:18.624235  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:18.629332  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:18.633865  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:19:18.633952  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:19:18.678615  508400 cri.go:89] found id: "5ee7283b07ea8dd92daaeb3fe86c762c21998f8488e35eb45abc804e28da5a1b"
	I0924 01:19:18.678638  508400 cri.go:89] found id: "6a91941381cb2fbd806c1cdd6b71ff46da40752cc4a426d4d6d1e3244a59a5d4"
	I0924 01:19:18.678645  508400 cri.go:89] found id: ""
	I0924 01:19:18.678652  508400 logs.go:276] 2 containers: [5ee7283b07ea8dd92daaeb3fe86c762c21998f8488e35eb45abc804e28da5a1b 6a91941381cb2fbd806c1cdd6b71ff46da40752cc4a426d4d6d1e3244a59a5d4]
	I0924 01:19:18.678718  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:18.682263  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:18.685672  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:19:18.685749  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:19:18.727441  508400 cri.go:89] found id: "9c1c31ea42d18dcf69a39e06236994a78de89e270308135e53ca3886ca3120ff"
	I0924 01:19:18.727461  508400 cri.go:89] found id: "9698bd1ae5c4cb7085eb93df40c4a78c0e947b6b6096887cb32d3d3b3f84f3d9"
	I0924 01:19:18.727466  508400 cri.go:89] found id: ""
	I0924 01:19:18.727473  508400 logs.go:276] 2 containers: [9c1c31ea42d18dcf69a39e06236994a78de89e270308135e53ca3886ca3120ff 9698bd1ae5c4cb7085eb93df40c4a78c0e947b6b6096887cb32d3d3b3f84f3d9]
	I0924 01:19:18.727530  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:18.731228  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:18.734832  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:19:18.735020  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:19:18.785451  508400 cri.go:89] found id: "23bb6483ab495904a5558786abb963f34754345c899d60d23a4dba22add17c23"
	I0924 01:19:18.785475  508400 cri.go:89] found id: "51be5c6b298bd9996efc77eff4f6ddceeee75d629fefc08a26771c02220a8a8f"
	I0924 01:19:18.785481  508400 cri.go:89] found id: ""
	I0924 01:19:18.785488  508400 logs.go:276] 2 containers: [23bb6483ab495904a5558786abb963f34754345c899d60d23a4dba22add17c23 51be5c6b298bd9996efc77eff4f6ddceeee75d629fefc08a26771c02220a8a8f]
	I0924 01:19:18.785574  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:18.791026  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:18.794884  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0924 01:19:18.794998  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:19:18.835569  508400 cri.go:89] found id: "84c427884fc8400e3e9c0d9609d187b64209b2ed3311860691b883ec72df07eb"
	I0924 01:19:18.835642  508400 cri.go:89] found id: "a051a2b449ff0ba4bdc77398857a62cef81b55d77c316d3c89f44d0d2b0880a7"
	I0924 01:19:18.835661  508400 cri.go:89] found id: ""
	I0924 01:19:18.835701  508400 logs.go:276] 2 containers: [84c427884fc8400e3e9c0d9609d187b64209b2ed3311860691b883ec72df07eb a051a2b449ff0ba4bdc77398857a62cef81b55d77c316d3c89f44d0d2b0880a7]
	I0924 01:19:18.835774  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:18.839386  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:18.843218  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:19:18.843290  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:19:18.884277  508400 cri.go:89] found id: "8577d1f6eee27ab28182821d47b50b803412dec67c963927e9a74c5760656c8f"
	I0924 01:19:18.884343  508400 cri.go:89] found id: ""
	I0924 01:19:18.884367  508400 logs.go:276] 1 containers: [8577d1f6eee27ab28182821d47b50b803412dec67c963927e9a74c5760656c8f]
	I0924 01:19:18.884452  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:18.888639  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:19:18.888754  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:19:18.931032  508400 cri.go:89] found id: "e2fc67748993981d4c6eb2f308b5b4c9936e8992daabaca65541da638dc7bafc"
	I0924 01:19:18.931103  508400 cri.go:89] found id: "9cf16f88c6bf005843393498b031408abccf86b16b40f53e85b3479bb4dfc17e"
	I0924 01:19:18.931123  508400 cri.go:89] found id: ""
	I0924 01:19:18.931147  508400 logs.go:276] 2 containers: [e2fc67748993981d4c6eb2f308b5b4c9936e8992daabaca65541da638dc7bafc 9cf16f88c6bf005843393498b031408abccf86b16b40f53e85b3479bb4dfc17e]
	I0924 01:19:18.931241  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:18.940238  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:18.944062  508400 logs.go:123] Gathering logs for container status ...
	I0924 01:19:18.944089  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:19:19.001414  508400 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:19:19.001468  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:19:19.147929  508400 logs.go:123] Gathering logs for kube-proxy [9698bd1ae5c4cb7085eb93df40c4a78c0e947b6b6096887cb32d3d3b3f84f3d9] ...
	I0924 01:19:19.147958  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9698bd1ae5c4cb7085eb93df40c4a78c0e947b6b6096887cb32d3d3b3f84f3d9"
	I0924 01:19:19.200173  508400 logs.go:123] Gathering logs for kindnet [84c427884fc8400e3e9c0d9609d187b64209b2ed3311860691b883ec72df07eb] ...
	I0924 01:19:19.200208  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84c427884fc8400e3e9c0d9609d187b64209b2ed3311860691b883ec72df07eb"
	I0924 01:19:19.243136  508400 logs.go:123] Gathering logs for kindnet [a051a2b449ff0ba4bdc77398857a62cef81b55d77c316d3c89f44d0d2b0880a7] ...
	I0924 01:19:19.243208  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a051a2b449ff0ba4bdc77398857a62cef81b55d77c316d3c89f44d0d2b0880a7"
	I0924 01:19:19.283229  508400 logs.go:123] Gathering logs for storage-provisioner [e2fc67748993981d4c6eb2f308b5b4c9936e8992daabaca65541da638dc7bafc] ...
	I0924 01:19:19.283259  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2fc67748993981d4c6eb2f308b5b4c9936e8992daabaca65541da638dc7bafc"
	I0924 01:19:19.336221  508400 logs.go:123] Gathering logs for containerd ...
	I0924 01:19:19.336249  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0924 01:19:19.407651  508400 logs.go:123] Gathering logs for dmesg ...
	I0924 01:19:19.407689  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:19:19.426384  508400 logs.go:123] Gathering logs for etcd [1cb94a371664ea3677fa31112c560d578dac0f9675204f133025c555395cdf7b] ...
	I0924 01:19:19.426421  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cb94a371664ea3677fa31112c560d578dac0f9675204f133025c555395cdf7b"
	I0924 01:19:19.482387  508400 logs.go:123] Gathering logs for kube-proxy [9c1c31ea42d18dcf69a39e06236994a78de89e270308135e53ca3886ca3120ff] ...
	I0924 01:19:19.482418  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c1c31ea42d18dcf69a39e06236994a78de89e270308135e53ca3886ca3120ff"
	I0924 01:19:19.524196  508400 logs.go:123] Gathering logs for kube-controller-manager [51be5c6b298bd9996efc77eff4f6ddceeee75d629fefc08a26771c02220a8a8f] ...
	I0924 01:19:19.524226  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51be5c6b298bd9996efc77eff4f6ddceeee75d629fefc08a26771c02220a8a8f"
	I0924 01:19:19.586996  508400 logs.go:123] Gathering logs for kubernetes-dashboard [8577d1f6eee27ab28182821d47b50b803412dec67c963927e9a74c5760656c8f] ...
	I0924 01:19:19.587031  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8577d1f6eee27ab28182821d47b50b803412dec67c963927e9a74c5760656c8f"
	I0924 01:19:19.629351  508400 logs.go:123] Gathering logs for storage-provisioner [9cf16f88c6bf005843393498b031408abccf86b16b40f53e85b3479bb4dfc17e] ...
	I0924 01:19:19.629388  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9cf16f88c6bf005843393498b031408abccf86b16b40f53e85b3479bb4dfc17e"
	I0924 01:19:19.684132  508400 logs.go:123] Gathering logs for etcd [c1e10e0c55d8c40ab07efefe182096b7d34a0c512646431ef6ffb097fd31a1a6] ...
	I0924 01:19:19.684160  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1e10e0c55d8c40ab07efefe182096b7d34a0c512646431ef6ffb097fd31a1a6"
	I0924 01:19:19.733733  508400 logs.go:123] Gathering logs for kube-scheduler [5ee7283b07ea8dd92daaeb3fe86c762c21998f8488e35eb45abc804e28da5a1b] ...
	I0924 01:19:19.733769  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ee7283b07ea8dd92daaeb3fe86c762c21998f8488e35eb45abc804e28da5a1b"
	I0924 01:19:19.779385  508400 logs.go:123] Gathering logs for kube-scheduler [6a91941381cb2fbd806c1cdd6b71ff46da40752cc4a426d4d6d1e3244a59a5d4] ...
	I0924 01:19:19.779418  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a91941381cb2fbd806c1cdd6b71ff46da40752cc4a426d4d6d1e3244a59a5d4"
	I0924 01:19:19.828325  508400 logs.go:123] Gathering logs for kube-controller-manager [23bb6483ab495904a5558786abb963f34754345c899d60d23a4dba22add17c23] ...
	I0924 01:19:19.828358  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23bb6483ab495904a5558786abb963f34754345c899d60d23a4dba22add17c23"
	I0924 01:19:19.896856  508400 logs.go:123] Gathering logs for kubelet ...
	I0924 01:19:19.896894  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0924 01:19:19.946874  508400 logs.go:138] Found kubelet problem: Sep 24 01:15:21 no-preload-558135 kubelet[657]: W0924 01:15:21.587778     657 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-558135" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-558135' and this object
	W0924 01:19:19.947161  508400 logs.go:138] Found kubelet problem: Sep 24 01:15:21 no-preload-558135 kubelet[657]: E0924 01:15:21.587830     657 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-558135\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-558135' and this object" logger="UnhandledError"
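The two warnings above come from the kubelet problem scan in logs.go:138, which reads recent journal output for the kubelet unit and flags lines matching known failure markers. A rough sketch of that kind of scan (the marker strings below are assumptions for illustration, not minikube's actual pattern list):

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Mirrors the logged command: sudo journalctl -u kubelet -n 400
	out, err := exec.Command("sudo", "journalctl", "-u", "kubelet", "-n", "400").Output()
	if err != nil {
		fmt.Println("journalctl failed:", err)
		return
	}
	// Illustrative markers; the RBAC failure above would match
	// "is forbidden" and "Unhandled Error".
	markers := []string{"Unhandled Error", "is forbidden", "failed to list"}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		line := sc.Text()
		for _, m := range markers {
			if strings.Contains(line, m) {
				fmt.Println("Found kubelet problem:", line)
				break
			}
		}
	}
}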
	I0924 01:19:19.978126  508400 logs.go:123] Gathering logs for kube-apiserver [cbade5a359066cba74a1f6bc775533cc3c5cb2c6d38523b102789f5bece0f7cd] ...
	I0924 01:19:19.978159  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cbade5a359066cba74a1f6bc775533cc3c5cb2c6d38523b102789f5bece0f7cd"
	I0924 01:19:20.047655  508400 logs.go:123] Gathering logs for kube-apiserver [49fe7dc079e9bff501f95307c0034839334c0e54affc0c752936b8e78ad4cbec] ...
	I0924 01:19:20.047695  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49fe7dc079e9bff501f95307c0034839334c0e54affc0c752936b8e78ad4cbec"
	I0924 01:19:20.100126  508400 logs.go:123] Gathering logs for coredns [34ec1669638ee25702af042937a534774b4fb1cb2f3ca4df2ce689ca6b58dbd1] ...
	I0924 01:19:20.100164  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34ec1669638ee25702af042937a534774b4fb1cb2f3ca4df2ce689ca6b58dbd1"
	I0924 01:19:20.707950  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:22.711943  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:20.146416  508400 logs.go:123] Gathering logs for coredns [898641ce3e6f61272d8acf7eb77c1646f8b44e4f93aa22994aa8df7184e73fd6] ...
	I0924 01:19:20.146455  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 898641ce3e6f61272d8acf7eb77c1646f8b44e4f93aa22994aa8df7184e73fd6"
	I0924 01:19:20.188342  508400 out.go:358] Setting ErrFile to fd 2...
	I0924 01:19:20.188407  508400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0924 01:19:20.188462  508400 out.go:270] X Problems detected in kubelet:
	W0924 01:19:20.188473  508400 out.go:270]   Sep 24 01:15:21 no-preload-558135 kubelet[657]: W0924 01:15:21.587778     657 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-558135" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-558135' and this object
	W0924 01:19:20.188481  508400 out.go:270]   Sep 24 01:15:21 no-preload-558135 kubelet[657]: E0924 01:15:21.587830     657 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-558135\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-558135' and this object" logger="UnhandledError"
	I0924 01:19:20.188492  508400 out.go:358] Setting ErrFile to fd 2...
	I0924 01:19:20.188503  508400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:19:24.759385  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:27.207971  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:29.209588  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:31.709364  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:30.189595  508400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:19:30.203444  508400 api_server.go:72] duration metric: took 4m17.281610313s to wait for apiserver process to appear ...
	I0924 01:19:30.203472  508400 api_server.go:88] waiting for apiserver healthz status ...
	I0924 01:19:30.203510  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:19:30.203572  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:19:30.250621  508400 cri.go:89] found id: "cbade5a359066cba74a1f6bc775533cc3c5cb2c6d38523b102789f5bece0f7cd"
	I0924 01:19:30.250649  508400 cri.go:89] found id: "49fe7dc079e9bff501f95307c0034839334c0e54affc0c752936b8e78ad4cbec"
	I0924 01:19:30.250655  508400 cri.go:89] found id: ""
	I0924 01:19:30.250662  508400 logs.go:276] 2 containers: [cbade5a359066cba74a1f6bc775533cc3c5cb2c6d38523b102789f5bece0f7cd 49fe7dc079e9bff501f95307c0034839334c0e54affc0c752936b8e78ad4cbec]
	I0924 01:19:30.250731  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:30.254985  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:30.258840  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0924 01:19:30.258976  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:19:30.304331  508400 cri.go:89] found id: "c1e10e0c55d8c40ab07efefe182096b7d34a0c512646431ef6ffb097fd31a1a6"
	I0924 01:19:30.304366  508400 cri.go:89] found id: "1cb94a371664ea3677fa31112c560d578dac0f9675204f133025c555395cdf7b"
	I0924 01:19:30.304372  508400 cri.go:89] found id: ""
	I0924 01:19:30.304380  508400 logs.go:276] 2 containers: [c1e10e0c55d8c40ab07efefe182096b7d34a0c512646431ef6ffb097fd31a1a6 1cb94a371664ea3677fa31112c560d578dac0f9675204f133025c555395cdf7b]
	I0924 01:19:30.304452  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:30.308595  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:30.312424  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0924 01:19:30.312522  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:19:30.355973  508400 cri.go:89] found id: "34ec1669638ee25702af042937a534774b4fb1cb2f3ca4df2ce689ca6b58dbd1"
	I0924 01:19:30.355995  508400 cri.go:89] found id: "898641ce3e6f61272d8acf7eb77c1646f8b44e4f93aa22994aa8df7184e73fd6"
	I0924 01:19:30.356000  508400 cri.go:89] found id: ""
	I0924 01:19:30.356007  508400 logs.go:276] 2 containers: [34ec1669638ee25702af042937a534774b4fb1cb2f3ca4df2ce689ca6b58dbd1 898641ce3e6f61272d8acf7eb77c1646f8b44e4f93aa22994aa8df7184e73fd6]
	I0924 01:19:30.356066  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:30.360054  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:30.363820  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:19:30.363908  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:19:30.403855  508400 cri.go:89] found id: "5ee7283b07ea8dd92daaeb3fe86c762c21998f8488e35eb45abc804e28da5a1b"
	I0924 01:19:30.403878  508400 cri.go:89] found id: "6a91941381cb2fbd806c1cdd6b71ff46da40752cc4a426d4d6d1e3244a59a5d4"
	I0924 01:19:30.403882  508400 cri.go:89] found id: ""
	I0924 01:19:30.403891  508400 logs.go:276] 2 containers: [5ee7283b07ea8dd92daaeb3fe86c762c21998f8488e35eb45abc804e28da5a1b 6a91941381cb2fbd806c1cdd6b71ff46da40752cc4a426d4d6d1e3244a59a5d4]
	I0924 01:19:30.403950  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:30.407797  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:30.411816  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:19:30.411927  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:19:30.452160  508400 cri.go:89] found id: "9c1c31ea42d18dcf69a39e06236994a78de89e270308135e53ca3886ca3120ff"
	I0924 01:19:30.452182  508400 cri.go:89] found id: "9698bd1ae5c4cb7085eb93df40c4a78c0e947b6b6096887cb32d3d3b3f84f3d9"
	I0924 01:19:30.452186  508400 cri.go:89] found id: ""
	I0924 01:19:30.452194  508400 logs.go:276] 2 containers: [9c1c31ea42d18dcf69a39e06236994a78de89e270308135e53ca3886ca3120ff 9698bd1ae5c4cb7085eb93df40c4a78c0e947b6b6096887cb32d3d3b3f84f3d9]
	I0924 01:19:30.452253  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:30.456160  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:30.461191  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:19:30.461297  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:19:30.501612  508400 cri.go:89] found id: "23bb6483ab495904a5558786abb963f34754345c899d60d23a4dba22add17c23"
	I0924 01:19:30.501638  508400 cri.go:89] found id: "51be5c6b298bd9996efc77eff4f6ddceeee75d629fefc08a26771c02220a8a8f"
	I0924 01:19:30.501643  508400 cri.go:89] found id: ""
	I0924 01:19:30.501651  508400 logs.go:276] 2 containers: [23bb6483ab495904a5558786abb963f34754345c899d60d23a4dba22add17c23 51be5c6b298bd9996efc77eff4f6ddceeee75d629fefc08a26771c02220a8a8f]
	I0924 01:19:30.501727  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:30.505950  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:30.511621  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0924 01:19:30.511745  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:19:30.555419  508400 cri.go:89] found id: "84c427884fc8400e3e9c0d9609d187b64209b2ed3311860691b883ec72df07eb"
	I0924 01:19:30.555482  508400 cri.go:89] found id: "a051a2b449ff0ba4bdc77398857a62cef81b55d77c316d3c89f44d0d2b0880a7"
	I0924 01:19:30.555501  508400 cri.go:89] found id: ""
	I0924 01:19:30.555514  508400 logs.go:276] 2 containers: [84c427884fc8400e3e9c0d9609d187b64209b2ed3311860691b883ec72df07eb a051a2b449ff0ba4bdc77398857a62cef81b55d77c316d3c89f44d0d2b0880a7]
	I0924 01:19:30.555577  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:30.559761  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:30.579349  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:19:30.579430  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:19:30.625537  508400 cri.go:89] found id: "e2fc67748993981d4c6eb2f308b5b4c9936e8992daabaca65541da638dc7bafc"
	I0924 01:19:30.625563  508400 cri.go:89] found id: "9cf16f88c6bf005843393498b031408abccf86b16b40f53e85b3479bb4dfc17e"
	I0924 01:19:30.625569  508400 cri.go:89] found id: ""
	I0924 01:19:30.625576  508400 logs.go:276] 2 containers: [e2fc67748993981d4c6eb2f308b5b4c9936e8992daabaca65541da638dc7bafc 9cf16f88c6bf005843393498b031408abccf86b16b40f53e85b3479bb4dfc17e]
	I0924 01:19:30.625636  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:30.639169  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:30.643075  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:19:30.643173  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:19:30.684254  508400 cri.go:89] found id: "8577d1f6eee27ab28182821d47b50b803412dec67c963927e9a74c5760656c8f"
	I0924 01:19:30.684320  508400 cri.go:89] found id: ""
	I0924 01:19:30.684335  508400 logs.go:276] 1 containers: [8577d1f6eee27ab28182821d47b50b803412dec67c963927e9a74c5760656c8f]
	I0924 01:19:30.684408  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:30.689179  508400 logs.go:123] Gathering logs for kubernetes-dashboard [8577d1f6eee27ab28182821d47b50b803412dec67c963927e9a74c5760656c8f] ...
	I0924 01:19:30.689202  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8577d1f6eee27ab28182821d47b50b803412dec67c963927e9a74c5760656c8f"
	I0924 01:19:30.747384  508400 logs.go:123] Gathering logs for container status ...
	I0924 01:19:30.747456  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:19:30.812660  508400 logs.go:123] Gathering logs for dmesg ...
	I0924 01:19:30.812692  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:19:30.830597  508400 logs.go:123] Gathering logs for etcd [1cb94a371664ea3677fa31112c560d578dac0f9675204f133025c555395cdf7b] ...
	I0924 01:19:30.830635  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cb94a371664ea3677fa31112c560d578dac0f9675204f133025c555395cdf7b"
	I0924 01:19:30.884335  508400 logs.go:123] Gathering logs for kindnet [84c427884fc8400e3e9c0d9609d187b64209b2ed3311860691b883ec72df07eb] ...
	I0924 01:19:30.884367  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84c427884fc8400e3e9c0d9609d187b64209b2ed3311860691b883ec72df07eb"
	I0924 01:19:30.931500  508400 logs.go:123] Gathering logs for storage-provisioner [9cf16f88c6bf005843393498b031408abccf86b16b40f53e85b3479bb4dfc17e] ...
	I0924 01:19:30.931534  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9cf16f88c6bf005843393498b031408abccf86b16b40f53e85b3479bb4dfc17e"
	I0924 01:19:30.975380  508400 logs.go:123] Gathering logs for kube-scheduler [6a91941381cb2fbd806c1cdd6b71ff46da40752cc4a426d4d6d1e3244a59a5d4] ...
	I0924 01:19:30.975409  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a91941381cb2fbd806c1cdd6b71ff46da40752cc4a426d4d6d1e3244a59a5d4"
	I0924 01:19:31.033898  508400 logs.go:123] Gathering logs for kube-controller-manager [51be5c6b298bd9996efc77eff4f6ddceeee75d629fefc08a26771c02220a8a8f] ...
	I0924 01:19:31.033931  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51be5c6b298bd9996efc77eff4f6ddceeee75d629fefc08a26771c02220a8a8f"
	I0924 01:19:31.107884  508400 logs.go:123] Gathering logs for containerd ...
	I0924 01:19:31.107924  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0924 01:19:31.179670  508400 logs.go:123] Gathering logs for kubelet ...
	I0924 01:19:31.179718  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0924 01:19:31.228684  508400 logs.go:138] Found kubelet problem: Sep 24 01:15:21 no-preload-558135 kubelet[657]: W0924 01:15:21.587778     657 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-558135" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-558135' and this object
	W0924 01:19:31.228936  508400 logs.go:138] Found kubelet problem: Sep 24 01:15:21 no-preload-558135 kubelet[657]: E0924 01:15:21.587830     657 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-558135\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-558135' and this object" logger="UnhandledError"
	I0924 01:19:31.260822  508400 logs.go:123] Gathering logs for kube-apiserver [cbade5a359066cba74a1f6bc775533cc3c5cb2c6d38523b102789f5bece0f7cd] ...
	I0924 01:19:31.260864  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cbade5a359066cba74a1f6bc775533cc3c5cb2c6d38523b102789f5bece0f7cd"
	I0924 01:19:31.315545  508400 logs.go:123] Gathering logs for etcd [c1e10e0c55d8c40ab07efefe182096b7d34a0c512646431ef6ffb097fd31a1a6] ...
	I0924 01:19:31.315582  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1e10e0c55d8c40ab07efefe182096b7d34a0c512646431ef6ffb097fd31a1a6"
	I0924 01:19:31.367154  508400 logs.go:123] Gathering logs for coredns [898641ce3e6f61272d8acf7eb77c1646f8b44e4f93aa22994aa8df7184e73fd6] ...
	I0924 01:19:31.367188  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 898641ce3e6f61272d8acf7eb77c1646f8b44e4f93aa22994aa8df7184e73fd6"
	I0924 01:19:31.407305  508400 logs.go:123] Gathering logs for coredns [34ec1669638ee25702af042937a534774b4fb1cb2f3ca4df2ce689ca6b58dbd1] ...
	I0924 01:19:31.407336  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34ec1669638ee25702af042937a534774b4fb1cb2f3ca4df2ce689ca6b58dbd1"
	I0924 01:19:31.450595  508400 logs.go:123] Gathering logs for kube-controller-manager [23bb6483ab495904a5558786abb963f34754345c899d60d23a4dba22add17c23] ...
	I0924 01:19:31.450624  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23bb6483ab495904a5558786abb963f34754345c899d60d23a4dba22add17c23"
	I0924 01:19:31.519493  508400 logs.go:123] Gathering logs for kindnet [a051a2b449ff0ba4bdc77398857a62cef81b55d77c316d3c89f44d0d2b0880a7] ...
	I0924 01:19:31.519530  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a051a2b449ff0ba4bdc77398857a62cef81b55d77c316d3c89f44d0d2b0880a7"
	I0924 01:19:31.565836  508400 logs.go:123] Gathering logs for storage-provisioner [e2fc67748993981d4c6eb2f308b5b4c9936e8992daabaca65541da638dc7bafc] ...
	I0924 01:19:31.565870  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2fc67748993981d4c6eb2f308b5b4c9936e8992daabaca65541da638dc7bafc"
	I0924 01:19:31.608057  508400 logs.go:123] Gathering logs for kube-proxy [9698bd1ae5c4cb7085eb93df40c4a78c0e947b6b6096887cb32d3d3b3f84f3d9] ...
	I0924 01:19:31.608094  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9698bd1ae5c4cb7085eb93df40c4a78c0e947b6b6096887cb32d3d3b3f84f3d9"
	I0924 01:19:31.658026  508400 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:19:31.658055  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:19:31.809123  508400 logs.go:123] Gathering logs for kube-apiserver [49fe7dc079e9bff501f95307c0034839334c0e54affc0c752936b8e78ad4cbec] ...
	I0924 01:19:31.809220  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49fe7dc079e9bff501f95307c0034839334c0e54affc0c752936b8e78ad4cbec"
	I0924 01:19:31.865852  508400 logs.go:123] Gathering logs for kube-scheduler [5ee7283b07ea8dd92daaeb3fe86c762c21998f8488e35eb45abc804e28da5a1b] ...
	I0924 01:19:31.865890  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ee7283b07ea8dd92daaeb3fe86c762c21998f8488e35eb45abc804e28da5a1b"
	I0924 01:19:31.907118  508400 logs.go:123] Gathering logs for kube-proxy [9c1c31ea42d18dcf69a39e06236994a78de89e270308135e53ca3886ca3120ff] ...
	I0924 01:19:31.907148  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c1c31ea42d18dcf69a39e06236994a78de89e270308135e53ca3886ca3120ff"
	I0924 01:19:31.947363  508400 out.go:358] Setting ErrFile to fd 2...
	I0924 01:19:31.947389  508400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0924 01:19:31.947464  508400 out.go:270] X Problems detected in kubelet:
	W0924 01:19:31.947482  508400 out.go:270]   Sep 24 01:15:21 no-preload-558135 kubelet[657]: W0924 01:15:21.587778     657 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-558135" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-558135' and this object
	W0924 01:19:31.947510  508400 out.go:270]   Sep 24 01:15:21 no-preload-558135 kubelet[657]: E0924 01:15:21.587830     657 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-558135\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-558135' and this object" logger="UnhandledError"
	I0924 01:19:31.947533  508400 out.go:358] Setting ErrFile to fd 2...
	I0924 01:19:31.947544  508400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:19:34.212724  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:36.708375  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:39.208776  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:41.708313  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:43.708560  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:41.949138  508400 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0924 01:19:41.957620  508400 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0924 01:19:41.958688  508400 api_server.go:141] control plane version: v1.31.1
	I0924 01:19:41.958713  508400 api_server.go:131] duration metric: took 11.755233225s to wait for apiserver health ...
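The healthz wait above polls the apiserver's /healthz endpoint until it returns 200 "ok". A minimal sketch of such a probe against the address from the log (TLS verification is skipped here for brevity, which a real client, minikube included, would not simply do; the 2m deadline and 2s interval are assumptions):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				// Matches the "returned 200: ok" lines in the log.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("healthz did not become healthy before deadline")
}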
	I0924 01:19:41.958721  508400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0924 01:19:41.958744  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:19:41.958811  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:19:41.999106  508400 cri.go:89] found id: "cbade5a359066cba74a1f6bc775533cc3c5cb2c6d38523b102789f5bece0f7cd"
	I0924 01:19:41.999127  508400 cri.go:89] found id: "49fe7dc079e9bff501f95307c0034839334c0e54affc0c752936b8e78ad4cbec"
	I0924 01:19:41.999133  508400 cri.go:89] found id: ""
	I0924 01:19:41.999140  508400 logs.go:276] 2 containers: [cbade5a359066cba74a1f6bc775533cc3c5cb2c6d38523b102789f5bece0f7cd 49fe7dc079e9bff501f95307c0034839334c0e54affc0c752936b8e78ad4cbec]
	I0924 01:19:41.999204  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:42.009402  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:42.030811  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0924 01:19:42.030898  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:19:42.074536  508400 cri.go:89] found id: "c1e10e0c55d8c40ab07efefe182096b7d34a0c512646431ef6ffb097fd31a1a6"
	I0924 01:19:42.074567  508400 cri.go:89] found id: "1cb94a371664ea3677fa31112c560d578dac0f9675204f133025c555395cdf7b"
	I0924 01:19:42.074576  508400 cri.go:89] found id: ""
	I0924 01:19:42.074585  508400 logs.go:276] 2 containers: [c1e10e0c55d8c40ab07efefe182096b7d34a0c512646431ef6ffb097fd31a1a6 1cb94a371664ea3677fa31112c560d578dac0f9675204f133025c555395cdf7b]
	I0924 01:19:42.074654  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:42.079721  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:42.085064  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0924 01:19:42.085211  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:19:42.138851  508400 cri.go:89] found id: "34ec1669638ee25702af042937a534774b4fb1cb2f3ca4df2ce689ca6b58dbd1"
	I0924 01:19:42.138971  508400 cri.go:89] found id: "898641ce3e6f61272d8acf7eb77c1646f8b44e4f93aa22994aa8df7184e73fd6"
	I0924 01:19:42.139001  508400 cri.go:89] found id: ""
	I0924 01:19:42.139031  508400 logs.go:276] 2 containers: [34ec1669638ee25702af042937a534774b4fb1cb2f3ca4df2ce689ca6b58dbd1 898641ce3e6f61272d8acf7eb77c1646f8b44e4f93aa22994aa8df7184e73fd6]
	I0924 01:19:42.139132  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:42.144225  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:42.148925  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:19:42.149016  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:19:42.227052  508400 cri.go:89] found id: "5ee7283b07ea8dd92daaeb3fe86c762c21998f8488e35eb45abc804e28da5a1b"
	I0924 01:19:42.227083  508400 cri.go:89] found id: "6a91941381cb2fbd806c1cdd6b71ff46da40752cc4a426d4d6d1e3244a59a5d4"
	I0924 01:19:42.227090  508400 cri.go:89] found id: ""
	I0924 01:19:42.227099  508400 logs.go:276] 2 containers: [5ee7283b07ea8dd92daaeb3fe86c762c21998f8488e35eb45abc804e28da5a1b 6a91941381cb2fbd806c1cdd6b71ff46da40752cc4a426d4d6d1e3244a59a5d4]
	I0924 01:19:42.227226  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:42.232021  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:42.236592  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:19:42.236678  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:19:42.284520  508400 cri.go:89] found id: "9c1c31ea42d18dcf69a39e06236994a78de89e270308135e53ca3886ca3120ff"
	I0924 01:19:42.284548  508400 cri.go:89] found id: "9698bd1ae5c4cb7085eb93df40c4a78c0e947b6b6096887cb32d3d3b3f84f3d9"
	I0924 01:19:42.284553  508400 cri.go:89] found id: ""
	I0924 01:19:42.284562  508400 logs.go:276] 2 containers: [9c1c31ea42d18dcf69a39e06236994a78de89e270308135e53ca3886ca3120ff 9698bd1ae5c4cb7085eb93df40c4a78c0e947b6b6096887cb32d3d3b3f84f3d9]
	I0924 01:19:42.284649  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:42.289013  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:42.293491  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:19:42.293622  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:19:42.338701  508400 cri.go:89] found id: "23bb6483ab495904a5558786abb963f34754345c899d60d23a4dba22add17c23"
	I0924 01:19:42.338725  508400 cri.go:89] found id: "51be5c6b298bd9996efc77eff4f6ddceeee75d629fefc08a26771c02220a8a8f"
	I0924 01:19:42.338731  508400 cri.go:89] found id: ""
	I0924 01:19:42.338738  508400 logs.go:276] 2 containers: [23bb6483ab495904a5558786abb963f34754345c899d60d23a4dba22add17c23 51be5c6b298bd9996efc77eff4f6ddceeee75d629fefc08a26771c02220a8a8f]
	I0924 01:19:42.338797  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:42.343336  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:42.348109  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0924 01:19:42.348258  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:19:42.389487  508400 cri.go:89] found id: "84c427884fc8400e3e9c0d9609d187b64209b2ed3311860691b883ec72df07eb"
	I0924 01:19:42.389565  508400 cri.go:89] found id: "a051a2b449ff0ba4bdc77398857a62cef81b55d77c316d3c89f44d0d2b0880a7"
	I0924 01:19:42.389584  508400 cri.go:89] found id: ""
	I0924 01:19:42.389609  508400 logs.go:276] 2 containers: [84c427884fc8400e3e9c0d9609d187b64209b2ed3311860691b883ec72df07eb a051a2b449ff0ba4bdc77398857a62cef81b55d77c316d3c89f44d0d2b0880a7]
	I0924 01:19:42.389690  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:42.393712  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:42.397552  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:19:42.397629  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:19:42.436236  508400 cri.go:89] found id: "8577d1f6eee27ab28182821d47b50b803412dec67c963927e9a74c5760656c8f"
	I0924 01:19:42.436260  508400 cri.go:89] found id: ""
	I0924 01:19:42.436268  508400 logs.go:276] 1 containers: [8577d1f6eee27ab28182821d47b50b803412dec67c963927e9a74c5760656c8f]
	I0924 01:19:42.436355  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:42.440309  508400 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:19:42.440433  508400 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:19:42.489097  508400 cri.go:89] found id: "e2fc67748993981d4c6eb2f308b5b4c9936e8992daabaca65541da638dc7bafc"
	I0924 01:19:42.489119  508400 cri.go:89] found id: "9cf16f88c6bf005843393498b031408abccf86b16b40f53e85b3479bb4dfc17e"
	I0924 01:19:42.489136  508400 cri.go:89] found id: ""
	I0924 01:19:42.489143  508400 logs.go:276] 2 containers: [e2fc67748993981d4c6eb2f308b5b4c9936e8992daabaca65541da638dc7bafc 9cf16f88c6bf005843393498b031408abccf86b16b40f53e85b3479bb4dfc17e]
	I0924 01:19:42.489201  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:42.493130  508400 ssh_runner.go:195] Run: which crictl
	I0924 01:19:42.496655  508400 logs.go:123] Gathering logs for dmesg ...
	I0924 01:19:42.496680  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:19:42.514763  508400 logs.go:123] Gathering logs for coredns [898641ce3e6f61272d8acf7eb77c1646f8b44e4f93aa22994aa8df7184e73fd6] ...
	I0924 01:19:42.514788  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 898641ce3e6f61272d8acf7eb77c1646f8b44e4f93aa22994aa8df7184e73fd6"
	I0924 01:19:42.560280  508400 logs.go:123] Gathering logs for kube-proxy [9698bd1ae5c4cb7085eb93df40c4a78c0e947b6b6096887cb32d3d3b3f84f3d9] ...
	I0924 01:19:42.560319  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9698bd1ae5c4cb7085eb93df40c4a78c0e947b6b6096887cb32d3d3b3f84f3d9"
	I0924 01:19:42.602776  508400 logs.go:123] Gathering logs for kindnet [a051a2b449ff0ba4bdc77398857a62cef81b55d77c316d3c89f44d0d2b0880a7] ...
	I0924 01:19:42.602808  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a051a2b449ff0ba4bdc77398857a62cef81b55d77c316d3c89f44d0d2b0880a7"
	I0924 01:19:42.650233  508400 logs.go:123] Gathering logs for kubernetes-dashboard [8577d1f6eee27ab28182821d47b50b803412dec67c963927e9a74c5760656c8f] ...
	I0924 01:19:42.650263  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8577d1f6eee27ab28182821d47b50b803412dec67c963927e9a74c5760656c8f"
	I0924 01:19:42.693972  508400 logs.go:123] Gathering logs for storage-provisioner [e2fc67748993981d4c6eb2f308b5b4c9936e8992daabaca65541da638dc7bafc] ...
	I0924 01:19:42.694007  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e2fc67748993981d4c6eb2f308b5b4c9936e8992daabaca65541da638dc7bafc"
	I0924 01:19:42.741731  508400 logs.go:123] Gathering logs for containerd ...
	I0924 01:19:42.741758  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0924 01:19:42.801318  508400 logs.go:123] Gathering logs for kube-scheduler [5ee7283b07ea8dd92daaeb3fe86c762c21998f8488e35eb45abc804e28da5a1b] ...
	I0924 01:19:42.801355  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ee7283b07ea8dd92daaeb3fe86c762c21998f8488e35eb45abc804e28da5a1b"
	I0924 01:19:42.838788  508400 logs.go:123] Gathering logs for kube-scheduler [6a91941381cb2fbd806c1cdd6b71ff46da40752cc4a426d4d6d1e3244a59a5d4] ...
	I0924 01:19:42.838856  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6a91941381cb2fbd806c1cdd6b71ff46da40752cc4a426d4d6d1e3244a59a5d4"
	I0924 01:19:42.903844  508400 logs.go:123] Gathering logs for kindnet [84c427884fc8400e3e9c0d9609d187b64209b2ed3311860691b883ec72df07eb] ...
	I0924 01:19:42.903881  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 84c427884fc8400e3e9c0d9609d187b64209b2ed3311860691b883ec72df07eb"
	I0924 01:19:42.954874  508400 logs.go:123] Gathering logs for container status ...
	I0924 01:19:42.954913  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:19:43.060093  508400 logs.go:123] Gathering logs for kubelet ...
	I0924 01:19:43.060121  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0924 01:19:43.115558  508400 logs.go:138] Found kubelet problem: Sep 24 01:15:21 no-preload-558135 kubelet[657]: W0924 01:15:21.587778     657 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-558135" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-558135' and this object
	W0924 01:19:43.115823  508400 logs.go:138] Found kubelet problem: Sep 24 01:15:21 no-preload-558135 kubelet[657]: E0924 01:15:21.587830     657 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-558135\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-558135' and this object" logger="UnhandledError"
	I0924 01:19:43.148304  508400 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:19:43.148343  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:19:43.284610  508400 logs.go:123] Gathering logs for kube-apiserver [49fe7dc079e9bff501f95307c0034839334c0e54affc0c752936b8e78ad4cbec] ...
	I0924 01:19:43.284641  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 49fe7dc079e9bff501f95307c0034839334c0e54affc0c752936b8e78ad4cbec"
	I0924 01:19:43.350548  508400 logs.go:123] Gathering logs for etcd [c1e10e0c55d8c40ab07efefe182096b7d34a0c512646431ef6ffb097fd31a1a6] ...
	I0924 01:19:43.350579  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1e10e0c55d8c40ab07efefe182096b7d34a0c512646431ef6ffb097fd31a1a6"
	I0924 01:19:43.436344  508400 logs.go:123] Gathering logs for etcd [1cb94a371664ea3677fa31112c560d578dac0f9675204f133025c555395cdf7b] ...
	I0924 01:19:43.436381  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1cb94a371664ea3677fa31112c560d578dac0f9675204f133025c555395cdf7b"
	I0924 01:19:43.493282  508400 logs.go:123] Gathering logs for storage-provisioner [9cf16f88c6bf005843393498b031408abccf86b16b40f53e85b3479bb4dfc17e] ...
	I0924 01:19:43.493313  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9cf16f88c6bf005843393498b031408abccf86b16b40f53e85b3479bb4dfc17e"
	I0924 01:19:43.535602  508400 logs.go:123] Gathering logs for kube-apiserver [cbade5a359066cba74a1f6bc775533cc3c5cb2c6d38523b102789f5bece0f7cd] ...
	I0924 01:19:43.535684  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cbade5a359066cba74a1f6bc775533cc3c5cb2c6d38523b102789f5bece0f7cd"
	I0924 01:19:43.592948  508400 logs.go:123] Gathering logs for coredns [34ec1669638ee25702af042937a534774b4fb1cb2f3ca4df2ce689ca6b58dbd1] ...
	I0924 01:19:43.592982  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 34ec1669638ee25702af042937a534774b4fb1cb2f3ca4df2ce689ca6b58dbd1"
	I0924 01:19:43.647804  508400 logs.go:123] Gathering logs for kube-proxy [9c1c31ea42d18dcf69a39e06236994a78de89e270308135e53ca3886ca3120ff] ...
	I0924 01:19:43.647836  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c1c31ea42d18dcf69a39e06236994a78de89e270308135e53ca3886ca3120ff"
	I0924 01:19:43.686861  508400 logs.go:123] Gathering logs for kube-controller-manager [23bb6483ab495904a5558786abb963f34754345c899d60d23a4dba22add17c23] ...
	I0924 01:19:43.686891  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 23bb6483ab495904a5558786abb963f34754345c899d60d23a4dba22add17c23"
	I0924 01:19:43.760213  508400 logs.go:123] Gathering logs for kube-controller-manager [51be5c6b298bd9996efc77eff4f6ddceeee75d629fefc08a26771c02220a8a8f] ...
	I0924 01:19:43.760249  508400 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 51be5c6b298bd9996efc77eff4f6ddceeee75d629fefc08a26771c02220a8a8f"
	I0924 01:19:43.822376  508400 out.go:358] Setting ErrFile to fd 2...
	I0924 01:19:43.822405  508400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0924 01:19:43.822484  508400 out.go:270] X Problems detected in kubelet:
	W0924 01:19:43.822498  508400 out.go:270]   Sep 24 01:15:21 no-preload-558135 kubelet[657]: W0924 01:15:21.587778     657 reflector.go:561] object-"kubernetes-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:no-preload-558135" cannot list resource "configmaps" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'no-preload-558135' and this object
	W0924 01:19:43.822525  508400 out.go:270]   Sep 24 01:15:21 no-preload-558135 kubelet[657]: E0924 01:15:21.587830     657 reflector.go:158] "Unhandled Error" err="object-\"kubernetes-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:no-preload-558135\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'no-preload-558135' and this object" logger="UnhandledError"
	I0924 01:19:43.822542  508400 out.go:358] Setting ErrFile to fd 2...
	I0924 01:19:43.822549  508400 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:19:46.207038  503471 pod_ready.go:103] pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace has status "Ready":"False"
	I0924 01:19:47.207950  503471 pod_ready.go:82] duration metric: took 4m0.006584819s for pod "metrics-server-9975d5f86-5qvnr" in "kube-system" namespace to be "Ready" ...
	E0924 01:19:47.207975  503471 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0924 01:19:47.207985  503471 pod_ready.go:39] duration metric: took 5m24.781817041s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0924 01:19:47.207999  503471 api_server.go:52] waiting for apiserver process to appear ...
	I0924 01:19:47.208031  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:19:47.208103  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:19:47.248170  503471 cri.go:89] found id: "0d53a8585147b008b66d8a2452e78b09c653dbd3ce70d1d015e55f15fe5bc97f"
	I0924 01:19:47.248237  503471 cri.go:89] found id: "a3f73fa00d8780bfbae087dd100b5dea468ee8262086a416cf2f1a32baf45e2c"
	I0924 01:19:47.248255  503471 cri.go:89] found id: ""
	I0924 01:19:47.248294  503471 logs.go:276] 2 containers: [0d53a8585147b008b66d8a2452e78b09c653dbd3ce70d1d015e55f15fe5bc97f a3f73fa00d8780bfbae087dd100b5dea468ee8262086a416cf2f1a32baf45e2c]
	I0924 01:19:47.248373  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.252444  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.255922  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0924 01:19:47.256037  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:19:47.299255  503471 cri.go:89] found id: "1a02c274b765eec9b703f23f5251e302342e1238e7833b5bdd4c5284939dba4e"
	I0924 01:19:47.299279  503471 cri.go:89] found id: "4b3e46c31d87b9f4360a7d001b62bfbec9eaeb7fac41c03c7afdcf345bdc85d2"
	I0924 01:19:47.299284  503471 cri.go:89] found id: ""
	I0924 01:19:47.299291  503471 logs.go:276] 2 containers: [1a02c274b765eec9b703f23f5251e302342e1238e7833b5bdd4c5284939dba4e 4b3e46c31d87b9f4360a7d001b62bfbec9eaeb7fac41c03c7afdcf345bdc85d2]
	I0924 01:19:47.299363  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.303065  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.307165  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0924 01:19:47.307239  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:19:47.344657  503471 cri.go:89] found id: "726b9b637e70873b8dcf15fbdf3dcedddcb9cd6aefaf87948d0de9b3e24e0da9"
	I0924 01:19:47.344678  503471 cri.go:89] found id: "ef14acfdf46ffeb7940e8ac4eaff81bfbeec196a22051bc2cb550f636b541f4c"
	I0924 01:19:47.344683  503471 cri.go:89] found id: ""
	I0924 01:19:47.344690  503471 logs.go:276] 2 containers: [726b9b637e70873b8dcf15fbdf3dcedddcb9cd6aefaf87948d0de9b3e24e0da9 ef14acfdf46ffeb7940e8ac4eaff81bfbeec196a22051bc2cb550f636b541f4c]
	I0924 01:19:47.344774  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.348345  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.352584  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:19:47.352658  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:19:47.397310  503471 cri.go:89] found id: "11cd4f142a26cd68a385f7146deef4ee88d1b79b18a062cef614b19d4789bc57"
	I0924 01:19:47.397331  503471 cri.go:89] found id: "92e903457a9761a7af4f917c69d759a5da333110e45a058d985445733540fbd3"
	I0924 01:19:47.397336  503471 cri.go:89] found id: ""
	I0924 01:19:47.397343  503471 logs.go:276] 2 containers: [11cd4f142a26cd68a385f7146deef4ee88d1b79b18a062cef614b19d4789bc57 92e903457a9761a7af4f917c69d759a5da333110e45a058d985445733540fbd3]
	I0924 01:19:47.397400  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.401198  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.404571  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:19:47.404647  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:19:47.450078  503471 cri.go:89] found id: "a3efc4d0adcbc5ed4559be7029923280ee64507be7e1478e828a19882024959d"
	I0924 01:19:47.450102  503471 cri.go:89] found id: "a216776df4a7683d11f652ba6188ed06bc8f4d2aca1cfdfd75fd8caa734d5f7c"
	I0924 01:19:47.450107  503471 cri.go:89] found id: ""
	I0924 01:19:47.450114  503471 logs.go:276] 2 containers: [a3efc4d0adcbc5ed4559be7029923280ee64507be7e1478e828a19882024959d a216776df4a7683d11f652ba6188ed06bc8f4d2aca1cfdfd75fd8caa734d5f7c]
	I0924 01:19:47.450195  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.454086  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.457899  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:19:47.457973  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:19:47.505274  503471 cri.go:89] found id: "14974f31e17ef863e970aac9ec385dd3a7df85dabded10c6c4158fa5ae8bb04b"
	I0924 01:19:47.505344  503471 cri.go:89] found id: "840c0a636700f21e186c2b7d0a495ae8ba333ce02654b24940c178384884ac55"
	I0924 01:19:47.505363  503471 cri.go:89] found id: ""
	I0924 01:19:47.505390  503471 logs.go:276] 2 containers: [14974f31e17ef863e970aac9ec385dd3a7df85dabded10c6c4158fa5ae8bb04b 840c0a636700f21e186c2b7d0a495ae8ba333ce02654b24940c178384884ac55]
	I0924 01:19:47.505489  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.509368  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.513105  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0924 01:19:47.513224  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:19:47.552883  503471 cri.go:89] found id: "c1d6f4b139eb39cb24db27d8618300757f678faca88fa20d24e066fa8a75c335"
	I0924 01:19:47.552915  503471 cri.go:89] found id: "321cc991e8aee83030aaf0fd4a0169e888f5b32b23247e48167abb3444235dee"
	I0924 01:19:47.552922  503471 cri.go:89] found id: ""
	I0924 01:19:47.552930  503471 logs.go:276] 2 containers: [c1d6f4b139eb39cb24db27d8618300757f678faca88fa20d24e066fa8a75c335 321cc991e8aee83030aaf0fd4a0169e888f5b32b23247e48167abb3444235dee]
	I0924 01:19:47.553023  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.556760  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.560250  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:19:47.560322  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:19:47.601478  503471 cri.go:89] found id: "ffc127edc7c13cabaa59d04ea0a177c94e9e6bf5bfccbfda48fd77224f966e63"
	I0924 01:19:47.601521  503471 cri.go:89] found id: ""
	I0924 01:19:47.601530  503471 logs.go:276] 1 containers: [ffc127edc7c13cabaa59d04ea0a177c94e9e6bf5bfccbfda48fd77224f966e63]
	I0924 01:19:47.601588  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.605414  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:19:47.605493  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:19:47.662058  503471 cri.go:89] found id: "fb460cedc70328f07448f754198dcea04f2edccb76c1ffe5b7a6a0942e34a3f2"
	I0924 01:19:47.662083  503471 cri.go:89] found id: "fd25616712b12bdb0e36c34f8d44469da4cb256070f158c54dc08e1b4037a13d"
	I0924 01:19:47.662088  503471 cri.go:89] found id: ""
	I0924 01:19:47.662096  503471 logs.go:276] 2 containers: [fb460cedc70328f07448f754198dcea04f2edccb76c1ffe5b7a6a0942e34a3f2 fd25616712b12bdb0e36c34f8d44469da4cb256070f158c54dc08e1b4037a13d]
	I0924 01:19:47.662156  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:47.666136  503471 ssh_runner.go:195] Run: which crictl
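
Editor's note: each "listing CRI containers … / found id:" group above is minikube resolving container IDs for one control-plane component by running sudo crictl ps -a --quiet --name=<component> over SSH, before pulling each container's logs. A minimal local sketch of that discovery step, assuming crictl is installed and sudo is available; listContainers is an illustrative helper, not minikube's cri.go.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainers returns all container IDs (running or exited) whose name
    // matches the component, the same query the log issues once per component.
    func listContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name", name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
            ids, err := listContainers(c)
            if err != nil {
                fmt.Println(c, "error:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }

Most components report two IDs here because the node was restarted (SecondStart): one container from the previous boot and one current.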
	I0924 01:19:47.669638  503471 logs.go:123] Gathering logs for coredns [726b9b637e70873b8dcf15fbdf3dcedddcb9cd6aefaf87948d0de9b3e24e0da9] ...
	I0924 01:19:47.669676  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 726b9b637e70873b8dcf15fbdf3dcedddcb9cd6aefaf87948d0de9b3e24e0da9"
	I0924 01:19:47.710636  503471 logs.go:123] Gathering logs for kube-scheduler [11cd4f142a26cd68a385f7146deef4ee88d1b79b18a062cef614b19d4789bc57] ...
	I0924 01:19:47.710669  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11cd4f142a26cd68a385f7146deef4ee88d1b79b18a062cef614b19d4789bc57"
	I0924 01:19:47.753288  503471 logs.go:123] Gathering logs for container status ...
	I0924 01:19:47.753320  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:19:47.800841  503471 logs.go:123] Gathering logs for dmesg ...
	I0924 01:19:47.800872  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:19:47.817994  503471 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:19:47.818024  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:19:47.981580  503471 logs.go:123] Gathering logs for kube-apiserver [a3f73fa00d8780bfbae087dd100b5dea468ee8262086a416cf2f1a32baf45e2c] ...
	I0924 01:19:47.981616  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3f73fa00d8780bfbae087dd100b5dea468ee8262086a416cf2f1a32baf45e2c"
	I0924 01:19:48.052176  503471 logs.go:123] Gathering logs for containerd ...
	I0924 01:19:48.052216  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0924 01:19:48.118198  503471 logs.go:123] Gathering logs for kube-proxy [a3efc4d0adcbc5ed4559be7029923280ee64507be7e1478e828a19882024959d] ...
	I0924 01:19:48.118240  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3efc4d0adcbc5ed4559be7029923280ee64507be7e1478e828a19882024959d"
	I0924 01:19:48.160574  503471 logs.go:123] Gathering logs for kubernetes-dashboard [ffc127edc7c13cabaa59d04ea0a177c94e9e6bf5bfccbfda48fd77224f966e63] ...
	I0924 01:19:48.160607  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffc127edc7c13cabaa59d04ea0a177c94e9e6bf5bfccbfda48fd77224f966e63"
	I0924 01:19:48.205909  503471 logs.go:123] Gathering logs for storage-provisioner [fd25616712b12bdb0e36c34f8d44469da4cb256070f158c54dc08e1b4037a13d] ...
	I0924 01:19:48.205939  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd25616712b12bdb0e36c34f8d44469da4cb256070f158c54dc08e1b4037a13d"
	I0924 01:19:48.245163  503471 logs.go:123] Gathering logs for kubelet ...
	I0924 01:19:48.245190  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0924 01:19:48.298751  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431464     666 reflector.go:138] object-"kube-system"/"kube-proxy-token-6n88c": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-6n88c" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:48.299014  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431566     666 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:48.299243  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431638     666 reflector.go:138] object-"kube-system"/"kindnet-token-jt6n9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jt6n9" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:48.299478  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431704     666 reflector.go:138] object-"kube-system"/"storage-provisioner-token-g5gtv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-g5gtv" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:48.299693  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431771     666 reflector.go:138] object-"default"/"default-token-2t7hj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2t7hj" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:48.299915  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431832     666 reflector.go:138] object-"kube-system"/"metrics-server-token-dpjw8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-dpjw8" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:48.300143  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.433069     666 reflector.go:138] object-"kube-system"/"coredns-token-djfwt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-djfwt" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:48.300363  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.433138     666 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:48.307885  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:24 old-k8s-version-654890 kubelet[666]: E0924 01:14:24.244333     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0924 01:19:48.309456  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:25 old-k8s-version-654890 kubelet[666]: E0924 01:14:25.186030     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.312301  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:39 old-k8s-version-654890 kubelet[666]: E0924 01:14:39.793083     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0924 01:19:48.314771  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:51 old-k8s-version-654890 kubelet[666]: E0924 01:14:51.304991     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.315114  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:52 old-k8s-version-654890 kubelet[666]: E0924 01:14:52.318075     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.315302  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:53 old-k8s-version-654890 kubelet[666]: E0924 01:14:53.784377     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.315745  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:55 old-k8s-version-654890 kubelet[666]: E0924 01:14:55.344748     666 pod_workers.go:191] Error syncing pod c12ca6a0-fd9b-45bf-9da0-2ec1193cce32 ("storage-provisioner_kube-system(c12ca6a0-fd9b-45bf-9da0-2ec1193cce32)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c12ca6a0-fd9b-45bf-9da0-2ec1193cce32)"
	W0924 01:19:48.316678  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:01 old-k8s-version-654890 kubelet[666]: E0924 01:15:01.366255     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.319205  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:07 old-k8s-version-654890 kubelet[666]: E0924 01:15:07.792722     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0924 01:19:48.319672  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:11 old-k8s-version-654890 kubelet[666]: E0924 01:15:11.008383     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.319860  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:20 old-k8s-version-654890 kubelet[666]: E0924 01:15:20.784619     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.320481  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:26 old-k8s-version-654890 kubelet[666]: E0924 01:15:26.452520     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.320821  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:30 old-k8s-version-654890 kubelet[666]: E0924 01:15:30.986615     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.321011  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:33 old-k8s-version-654890 kubelet[666]: E0924 01:15:33.783850     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.321345  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:44 old-k8s-version-654890 kubelet[666]: E0924 01:15:44.783545     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.321534  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:44 old-k8s-version-654890 kubelet[666]: E0924 01:15:44.784908     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.321873  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:59 old-k8s-version-654890 kubelet[666]: E0924 01:15:59.784164     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.324342  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:59 old-k8s-version-654890 kubelet[666]: E0924 01:15:59.792225     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0924 01:19:48.324660  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:14 old-k8s-version-654890 kubelet[666]: E0924 01:16:14.789130     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.325123  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:15 old-k8s-version-654890 kubelet[666]: E0924 01:16:15.580475     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.325467  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:20 old-k8s-version-654890 kubelet[666]: E0924 01:16:20.992161     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.325652  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:29 old-k8s-version-654890 kubelet[666]: E0924 01:16:29.787709     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.326017  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:34 old-k8s-version-654890 kubelet[666]: E0924 01:16:34.784335     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.326235  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:42 old-k8s-version-654890 kubelet[666]: E0924 01:16:42.784427     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.326575  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:45 old-k8s-version-654890 kubelet[666]: E0924 01:16:45.783505     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.326768  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:53 old-k8s-version-654890 kubelet[666]: E0924 01:16:53.784108     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.327107  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:57 old-k8s-version-654890 kubelet[666]: E0924 01:16:57.783801     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.327294  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:06 old-k8s-version-654890 kubelet[666]: E0924 01:17:06.784121     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.327631  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:11 old-k8s-version-654890 kubelet[666]: E0924 01:17:11.783481     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.327817  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:18 old-k8s-version-654890 kubelet[666]: E0924 01:17:18.783911     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.328150  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:24 old-k8s-version-654890 kubelet[666]: E0924 01:17:24.784083     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.330620  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:31 old-k8s-version-654890 kubelet[666]: E0924 01:17:31.792392     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0924 01:19:48.331220  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:39 old-k8s-version-654890 kubelet[666]: E0924 01:17:39.856284     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.331553  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:40 old-k8s-version-654890 kubelet[666]: E0924 01:17:40.985923     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.331738  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:45 old-k8s-version-654890 kubelet[666]: E0924 01:17:45.784024     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.332067  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:55 old-k8s-version-654890 kubelet[666]: E0924 01:17:55.783900     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.332252  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:59 old-k8s-version-654890 kubelet[666]: E0924 01:17:59.784186     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.332585  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:06 old-k8s-version-654890 kubelet[666]: E0924 01:18:06.788365     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.332774  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:12 old-k8s-version-654890 kubelet[666]: E0924 01:18:12.783859     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.333106  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:18 old-k8s-version-654890 kubelet[666]: E0924 01:18:18.783560     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.333291  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:27 old-k8s-version-654890 kubelet[666]: E0924 01:18:27.783904     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.333627  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:30 old-k8s-version-654890 kubelet[666]: E0924 01:18:30.783887     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.333814  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:41 old-k8s-version-654890 kubelet[666]: E0924 01:18:41.784268     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.334149  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:44 old-k8s-version-654890 kubelet[666]: E0924 01:18:44.783947     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.334334  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:55 old-k8s-version-654890 kubelet[666]: E0924 01:18:55.783966     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.334664  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:58 old-k8s-version-654890 kubelet[666]: E0924 01:18:58.784580     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.334851  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:09 old-k8s-version-654890 kubelet[666]: E0924 01:19:09.783890     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.335185  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:11 old-k8s-version-654890 kubelet[666]: E0924 01:19:11.783671     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.335370  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:20 old-k8s-version-654890 kubelet[666]: E0924 01:19:20.784316     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.335699  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:23 old-k8s-version-654890 kubelet[666]: E0924 01:19:23.783518     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.336028  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:34 old-k8s-version-654890 kubelet[666]: E0924 01:19:34.788045     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.336213  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:34 old-k8s-version-654890 kubelet[666]: E0924 01:19:34.792957     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.336399  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:45 old-k8s-version-654890 kubelet[666]: E0924 01:19:45.783770     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
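
Editor's note: the long run of "Found kubelet problem" warnings above is minikube scanning the last 400 lines of journalctl -u kubelet for known failure signatures. Every hit traces back to the same two stuck pods: metrics-server (ErrImagePull/ImagePullBackOff against the unresolvable fake.domain registry) and dashboard-metrics-scraper (CrashLoopBackOff with growing back-off intervals). A rough sketch of such a scan, reading journal output from stdin; the two patterns are illustrative only, not minikube's actual problem list in logs.go.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Illustrative signatures only; minikube maintains its own list.
    var problemPatterns = []*regexp.Regexp{
        regexp.MustCompile(`ErrImagePull|ImagePullBackOff`),
        regexp.MustCompile(`CrashLoopBackOff`),
    }

    func main() {
        // Usage: journalctl -u kubelet -n 400 | go run scan.go
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
        for sc.Scan() {
            line := sc.Text()
            for _, re := range problemPatterns {
                if re.MatchString(line) {
                    fmt.Println("Found kubelet problem:", line)
                    break
                }
            }
        }
    }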
	I0924 01:19:48.336409  503471 logs.go:123] Gathering logs for kube-controller-manager [840c0a636700f21e186c2b7d0a495ae8ba333ce02654b24940c178384884ac55] ...
	I0924 01:19:48.336425  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 840c0a636700f21e186c2b7d0a495ae8ba333ce02654b24940c178384884ac55"
	I0924 01:19:48.394895  503471 logs.go:123] Gathering logs for kindnet [c1d6f4b139eb39cb24db27d8618300757f678faca88fa20d24e066fa8a75c335] ...
	I0924 01:19:48.394935  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1d6f4b139eb39cb24db27d8618300757f678faca88fa20d24e066fa8a75c335"
	I0924 01:19:48.445555  503471 logs.go:123] Gathering logs for coredns [ef14acfdf46ffeb7940e8ac4eaff81bfbeec196a22051bc2cb550f636b541f4c] ...
	I0924 01:19:48.445586  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef14acfdf46ffeb7940e8ac4eaff81bfbeec196a22051bc2cb550f636b541f4c"
	I0924 01:19:48.484819  503471 logs.go:123] Gathering logs for kube-scheduler [92e903457a9761a7af4f917c69d759a5da333110e45a058d985445733540fbd3] ...
	I0924 01:19:48.484886  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92e903457a9761a7af4f917c69d759a5da333110e45a058d985445733540fbd3"
	I0924 01:19:48.531995  503471 logs.go:123] Gathering logs for kube-proxy [a216776df4a7683d11f652ba6188ed06bc8f4d2aca1cfdfd75fd8caa734d5f7c] ...
	I0924 01:19:48.532082  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a216776df4a7683d11f652ba6188ed06bc8f4d2aca1cfdfd75fd8caa734d5f7c"
	I0924 01:19:48.573118  503471 logs.go:123] Gathering logs for kube-controller-manager [14974f31e17ef863e970aac9ec385dd3a7df85dabded10c6c4158fa5ae8bb04b] ...
	I0924 01:19:48.573189  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14974f31e17ef863e970aac9ec385dd3a7df85dabded10c6c4158fa5ae8bb04b"
	I0924 01:19:48.633525  503471 logs.go:123] Gathering logs for kindnet [321cc991e8aee83030aaf0fd4a0169e888f5b32b23247e48167abb3444235dee] ...
	I0924 01:19:48.633563  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 321cc991e8aee83030aaf0fd4a0169e888f5b32b23247e48167abb3444235dee"
	I0924 01:19:48.680372  503471 logs.go:123] Gathering logs for kube-apiserver [0d53a8585147b008b66d8a2452e78b09c653dbd3ce70d1d015e55f15fe5bc97f] ...
	I0924 01:19:48.680403  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d53a8585147b008b66d8a2452e78b09c653dbd3ce70d1d015e55f15fe5bc97f"
	I0924 01:19:48.742350  503471 logs.go:123] Gathering logs for etcd [1a02c274b765eec9b703f23f5251e302342e1238e7833b5bdd4c5284939dba4e] ...
	I0924 01:19:48.742384  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a02c274b765eec9b703f23f5251e302342e1238e7833b5bdd4c5284939dba4e"
	I0924 01:19:48.797001  503471 logs.go:123] Gathering logs for etcd [4b3e46c31d87b9f4360a7d001b62bfbec9eaeb7fac41c03c7afdcf345bdc85d2] ...
	I0924 01:19:48.797035  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3e46c31d87b9f4360a7d001b62bfbec9eaeb7fac41c03c7afdcf345bdc85d2"
	I0924 01:19:48.847657  503471 logs.go:123] Gathering logs for storage-provisioner [fb460cedc70328f07448f754198dcea04f2edccb76c1ffe5b7a6a0942e34a3f2] ...
	I0924 01:19:48.847687  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb460cedc70328f07448f754198dcea04f2edccb76c1ffe5b7a6a0942e34a3f2"
	I0924 01:19:48.891111  503471 out.go:358] Setting ErrFile to fd 2...
	I0924 01:19:48.891138  503471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0924 01:19:48.891192  503471 out.go:270] X Problems detected in kubelet:
	W0924 01:19:48.891209  503471 out.go:270]   Sep 24 01:19:20 old-k8s-version-654890 kubelet[666]: E0924 01:19:20.784316     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.891225  503471 out.go:270]   Sep 24 01:19:23 old-k8s-version-654890 kubelet[666]: E0924 01:19:23.783518     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.891256  503471 out.go:270]   Sep 24 01:19:34 old-k8s-version-654890 kubelet[666]: E0924 01:19:34.788045     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:48.891265  503471 out.go:270]   Sep 24 01:19:34 old-k8s-version-654890 kubelet[666]: E0924 01:19:34.792957     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:48.891275  503471 out.go:270]   Sep 24 01:19:45 old-k8s-version-654890 kubelet[666]: E0924 01:19:45.783770     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0924 01:19:48.891281  503471 out.go:358] Setting ErrFile to fd 2...
	I0924 01:19:48.891290  503471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:19:53.831041  508400 system_pods.go:59] 9 kube-system pods found
	I0924 01:19:53.831084  508400 system_pods.go:61] "coredns-7c65d6cfc9-rq7m2" [3466c720-c315-49c5-ab82-124e89533aef] Running
	I0924 01:19:53.831092  508400 system_pods.go:61] "etcd-no-preload-558135" [ff66b09a-46f2-4b17-88b0-afb588b773c6] Running
	I0924 01:19:53.831096  508400 system_pods.go:61] "kindnet-f8qbt" [53c3dc9b-cea2-4ff8-b553-dd0c30157e23] Running
	I0924 01:19:53.831198  508400 system_pods.go:61] "kube-apiserver-no-preload-558135" [db3f8c5f-20a4-4c42-b78c-8c27470be367] Running
	I0924 01:19:53.831204  508400 system_pods.go:61] "kube-controller-manager-no-preload-558135" [dfc5a382-d975-4dc1-b20b-90cfd175b47c] Running
	I0924 01:19:53.831209  508400 system_pods.go:61] "kube-proxy-krnb9" [186e9c5d-3693-48a6-9657-955801ac448d] Running
	I0924 01:19:53.831213  508400 system_pods.go:61] "kube-scheduler-no-preload-558135" [890e8fae-d914-46ad-bb5a-7778684d10c8] Running
	I0924 01:19:53.831220  508400 system_pods.go:61] "metrics-server-6867b74b74-46xh4" [4e3e7ba3-4efd-4a9b-9682-443e9112afba] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:19:53.831231  508400 system_pods.go:61] "storage-provisioner" [7ba817a8-e447-48d8-9b19-14a59da5d457] Running
	I0924 01:19:53.831240  508400 system_pods.go:74] duration metric: took 11.872511683s to wait for pod list to return data ...
	I0924 01:19:53.831264  508400 default_sa.go:34] waiting for default service account to be created ...
	I0924 01:19:53.834369  508400 default_sa.go:45] found service account: "default"
	I0924 01:19:53.834401  508400 default_sa.go:55] duration metric: took 3.110534ms for default service account to be created ...
	I0924 01:19:53.834412  508400 system_pods.go:116] waiting for k8s-apps to be running ...
	I0924 01:19:53.841050  508400 system_pods.go:86] 9 kube-system pods found
	I0924 01:19:53.841085  508400 system_pods.go:89] "coredns-7c65d6cfc9-rq7m2" [3466c720-c315-49c5-ab82-124e89533aef] Running
	I0924 01:19:53.841093  508400 system_pods.go:89] "etcd-no-preload-558135" [ff66b09a-46f2-4b17-88b0-afb588b773c6] Running
	I0924 01:19:53.841098  508400 system_pods.go:89] "kindnet-f8qbt" [53c3dc9b-cea2-4ff8-b553-dd0c30157e23] Running
	I0924 01:19:53.841102  508400 system_pods.go:89] "kube-apiserver-no-preload-558135" [db3f8c5f-20a4-4c42-b78c-8c27470be367] Running
	I0924 01:19:53.841108  508400 system_pods.go:89] "kube-controller-manager-no-preload-558135" [dfc5a382-d975-4dc1-b20b-90cfd175b47c] Running
	I0924 01:19:53.841112  508400 system_pods.go:89] "kube-proxy-krnb9" [186e9c5d-3693-48a6-9657-955801ac448d] Running
	I0924 01:19:53.841116  508400 system_pods.go:89] "kube-scheduler-no-preload-558135" [890e8fae-d914-46ad-bb5a-7778684d10c8] Running
	I0924 01:19:53.841124  508400 system_pods.go:89] "metrics-server-6867b74b74-46xh4" [4e3e7ba3-4efd-4a9b-9682-443e9112afba] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0924 01:19:53.841128  508400 system_pods.go:89] "storage-provisioner" [7ba817a8-e447-48d8-9b19-14a59da5d457] Running
	I0924 01:19:53.841137  508400 system_pods.go:126] duration metric: took 6.718849ms to wait for k8s-apps to be running ...
	I0924 01:19:53.841150  508400 system_svc.go:44] waiting for kubelet service to be running ....
	I0924 01:19:53.841253  508400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 01:19:53.854173  508400 system_svc.go:56] duration metric: took 13.012968ms WaitForService to wait for kubelet
	I0924 01:19:53.854203  508400 kubeadm.go:582] duration metric: took 4m40.932372039s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0924 01:19:53.854224  508400 node_conditions.go:102] verifying NodePressure condition ...
	I0924 01:19:53.857708  508400 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0924 01:19:53.857738  508400 node_conditions.go:123] node cpu capacity is 2
	I0924 01:19:53.857750  508400 node_conditions.go:105] duration metric: took 3.519855ms to run NodePressure ...
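
Editor's note: the NodePressure verification above reads the node's reported capacity (2 CPUs, ~203 GiB ephemeral storage on this runner) and confirms no pressure conditions are set. A minimal client-go sketch of the same read, assuming a reachable kubeconfig; the pressure-condition filter is illustrative.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
            for _, c := range n.Status.Conditions {
                // Memory/Disk/PIDPressure should all be False on a healthy node.
                if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    fmt.Printf("  pressure condition %s is True: %s\n", c.Type, c.Message)
                }
            }
        }
    }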
	I0924 01:19:53.857761  508400 start.go:241] waiting for startup goroutines ...
	I0924 01:19:53.857768  508400 start.go:246] waiting for cluster config update ...
	I0924 01:19:53.857779  508400 start.go:255] writing updated cluster config ...
	I0924 01:19:53.858104  508400 ssh_runner.go:195] Run: rm -f paused
	I0924 01:19:53.919100  508400 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0924 01:19:53.921339  508400 out.go:177] * Done! kubectl is now configured to use "no-preload-558135" cluster and "default" namespace by default
	I0924 01:19:58.892677  503471 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 01:19:58.904580  503471 api_server.go:72] duration metric: took 5m55.910809038s to wait for apiserver process to appear ...
	I0924 01:19:58.904607  503471 api_server.go:88] waiting for apiserver healthz status ...
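
Editor's note: having confirmed the kube-apiserver process via pgrep, minikube now polls the apiserver's healthz endpoint, re-collecting component logs (the lines that follow) while the check keeps failing. A minimal sketch of such a probe, assuming the apiserver is reachable at a known host:port, that anonymous access to /healthz is allowed (the Kubernetes default), and that skipping TLS verification is acceptable for a throwaway test cluster; the URL is illustrative.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Local test cluster only: the apiserver cert is not in our trust store.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        // Illustrative endpoint; minikube derives host:port from the cluster config.
        resp, err := client.Get("https://192.168.76.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz not reachable:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
    }

A healthy apiserver answers 200 with the body "ok"; anything else keeps the wait loop (and the log gathering below) going.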
	I0924 01:19:58.904644  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0924 01:19:58.904701  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0924 01:19:58.944094  503471 cri.go:89] found id: "0d53a8585147b008b66d8a2452e78b09c653dbd3ce70d1d015e55f15fe5bc97f"
	I0924 01:19:58.944123  503471 cri.go:89] found id: "a3f73fa00d8780bfbae087dd100b5dea468ee8262086a416cf2f1a32baf45e2c"
	I0924 01:19:58.944129  503471 cri.go:89] found id: ""
	I0924 01:19:58.944140  503471 logs.go:276] 2 containers: [0d53a8585147b008b66d8a2452e78b09c653dbd3ce70d1d015e55f15fe5bc97f a3f73fa00d8780bfbae087dd100b5dea468ee8262086a416cf2f1a32baf45e2c]
	I0924 01:19:58.944210  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:58.948097  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:58.952102  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0924 01:19:58.952189  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0924 01:19:58.990627  503471 cri.go:89] found id: "1a02c274b765eec9b703f23f5251e302342e1238e7833b5bdd4c5284939dba4e"
	I0924 01:19:58.990651  503471 cri.go:89] found id: "4b3e46c31d87b9f4360a7d001b62bfbec9eaeb7fac41c03c7afdcf345bdc85d2"
	I0924 01:19:58.990657  503471 cri.go:89] found id: ""
	I0924 01:19:58.990664  503471 logs.go:276] 2 containers: [1a02c274b765eec9b703f23f5251e302342e1238e7833b5bdd4c5284939dba4e 4b3e46c31d87b9f4360a7d001b62bfbec9eaeb7fac41c03c7afdcf345bdc85d2]
	I0924 01:19:58.990744  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:58.994962  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:58.998358  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0924 01:19:58.998428  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0924 01:19:59.039355  503471 cri.go:89] found id: "726b9b637e70873b8dcf15fbdf3dcedddcb9cd6aefaf87948d0de9b3e24e0da9"
	I0924 01:19:59.039379  503471 cri.go:89] found id: "ef14acfdf46ffeb7940e8ac4eaff81bfbeec196a22051bc2cb550f636b541f4c"
	I0924 01:19:59.039384  503471 cri.go:89] found id: ""
	I0924 01:19:59.039391  503471 logs.go:276] 2 containers: [726b9b637e70873b8dcf15fbdf3dcedddcb9cd6aefaf87948d0de9b3e24e0da9 ef14acfdf46ffeb7940e8ac4eaff81bfbeec196a22051bc2cb550f636b541f4c]
	I0924 01:19:59.039451  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.043628  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.047352  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0924 01:19:59.047432  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0924 01:19:59.088932  503471 cri.go:89] found id: "11cd4f142a26cd68a385f7146deef4ee88d1b79b18a062cef614b19d4789bc57"
	I0924 01:19:59.088957  503471 cri.go:89] found id: "92e903457a9761a7af4f917c69d759a5da333110e45a058d985445733540fbd3"
	I0924 01:19:59.088963  503471 cri.go:89] found id: ""
	I0924 01:19:59.088970  503471 logs.go:276] 2 containers: [11cd4f142a26cd68a385f7146deef4ee88d1b79b18a062cef614b19d4789bc57 92e903457a9761a7af4f917c69d759a5da333110e45a058d985445733540fbd3]
	I0924 01:19:59.089029  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.093313  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.096780  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0924 01:19:59.096850  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0924 01:19:59.137471  503471 cri.go:89] found id: "a3efc4d0adcbc5ed4559be7029923280ee64507be7e1478e828a19882024959d"
	I0924 01:19:59.137492  503471 cri.go:89] found id: "a216776df4a7683d11f652ba6188ed06bc8f4d2aca1cfdfd75fd8caa734d5f7c"
	I0924 01:19:59.137497  503471 cri.go:89] found id: ""
	I0924 01:19:59.137505  503471 logs.go:276] 2 containers: [a3efc4d0adcbc5ed4559be7029923280ee64507be7e1478e828a19882024959d a216776df4a7683d11f652ba6188ed06bc8f4d2aca1cfdfd75fd8caa734d5f7c]
	I0924 01:19:59.137584  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.141423  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.144785  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0924 01:19:59.144903  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0924 01:19:59.183946  503471 cri.go:89] found id: "14974f31e17ef863e970aac9ec385dd3a7df85dabded10c6c4158fa5ae8bb04b"
	I0924 01:19:59.183968  503471 cri.go:89] found id: "840c0a636700f21e186c2b7d0a495ae8ba333ce02654b24940c178384884ac55"
	I0924 01:19:59.183973  503471 cri.go:89] found id: ""
	I0924 01:19:59.183980  503471 logs.go:276] 2 containers: [14974f31e17ef863e970aac9ec385dd3a7df85dabded10c6c4158fa5ae8bb04b 840c0a636700f21e186c2b7d0a495ae8ba333ce02654b24940c178384884ac55]
	I0924 01:19:59.184038  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.187604  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.191086  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0924 01:19:59.191163  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0924 01:19:59.233371  503471 cri.go:89] found id: "c1d6f4b139eb39cb24db27d8618300757f678faca88fa20d24e066fa8a75c335"
	I0924 01:19:59.233394  503471 cri.go:89] found id: "321cc991e8aee83030aaf0fd4a0169e888f5b32b23247e48167abb3444235dee"
	I0924 01:19:59.233399  503471 cri.go:89] found id: ""
	I0924 01:19:59.233407  503471 logs.go:276] 2 containers: [c1d6f4b139eb39cb24db27d8618300757f678faca88fa20d24e066fa8a75c335 321cc991e8aee83030aaf0fd4a0169e888f5b32b23247e48167abb3444235dee]
	I0924 01:19:59.233487  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.237332  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.241220  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0924 01:19:59.241332  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0924 01:19:59.284694  503471 cri.go:89] found id: "fb460cedc70328f07448f754198dcea04f2edccb76c1ffe5b7a6a0942e34a3f2"
	I0924 01:19:59.284770  503471 cri.go:89] found id: "fd25616712b12bdb0e36c34f8d44469da4cb256070f158c54dc08e1b4037a13d"
	I0924 01:19:59.284783  503471 cri.go:89] found id: ""
	I0924 01:19:59.284791  503471 logs.go:276] 2 containers: [fb460cedc70328f07448f754198dcea04f2edccb76c1ffe5b7a6a0942e34a3f2 fd25616712b12bdb0e36c34f8d44469da4cb256070f158c54dc08e1b4037a13d]
	I0924 01:19:59.284904  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.288841  503471 ssh_runner.go:195] Run: which crictl
	I0924 01:19:59.292850  503471 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0924 01:19:59.292961  503471 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0924 01:19:59.342825  503471 cri.go:89] found id: "ffc127edc7c13cabaa59d04ea0a177c94e9e6bf5bfccbfda48fd77224f966e63"
	I0924 01:19:59.342879  503471 cri.go:89] found id: ""
	I0924 01:19:59.342902  503471 logs.go:276] 1 containers: [ffc127edc7c13cabaa59d04ea0a177c94e9e6bf5bfccbfda48fd77224f966e63]
	I0924 01:19:59.343028  503471 ssh_runner.go:195] Run: which crictl
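The loop above enumerates, for each control-plane component, every matching container (running or exited) so logs can be pulled from both the current and the previous incarnation. The discovery can be replayed manually with the same two commands the runner itself issues above (assuming crictl on the node is configured for its default CRI endpoint):

    # List all kube-apiserver container IDs, including exited ones.
    sudo crictl ps -a --quiet --name=kube-apiserver
    # Tail the last 400 lines of one of the returned IDs.
    sudo /usr/bin/crictl logs --tail 400 <container-id>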
	I0924 01:19:59.346892  503471 logs.go:123] Gathering logs for kindnet [321cc991e8aee83030aaf0fd4a0169e888f5b32b23247e48167abb3444235dee] ...
	I0924 01:19:59.346956  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 321cc991e8aee83030aaf0fd4a0169e888f5b32b23247e48167abb3444235dee"
	I0924 01:19:59.390245  503471 logs.go:123] Gathering logs for storage-provisioner [fb460cedc70328f07448f754198dcea04f2edccb76c1ffe5b7a6a0942e34a3f2] ...
	I0924 01:19:59.390298  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fb460cedc70328f07448f754198dcea04f2edccb76c1ffe5b7a6a0942e34a3f2"
	I0924 01:19:59.430145  503471 logs.go:123] Gathering logs for coredns [ef14acfdf46ffeb7940e8ac4eaff81bfbeec196a22051bc2cb550f636b541f4c] ...
	I0924 01:19:59.430171  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef14acfdf46ffeb7940e8ac4eaff81bfbeec196a22051bc2cb550f636b541f4c"
	I0924 01:19:59.477526  503471 logs.go:123] Gathering logs for kube-scheduler [92e903457a9761a7af4f917c69d759a5da333110e45a058d985445733540fbd3] ...
	I0924 01:19:59.477553  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92e903457a9761a7af4f917c69d759a5da333110e45a058d985445733540fbd3"
	I0924 01:19:59.522254  503471 logs.go:123] Gathering logs for kube-proxy [a3efc4d0adcbc5ed4559be7029923280ee64507be7e1478e828a19882024959d] ...
	I0924 01:19:59.522285  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3efc4d0adcbc5ed4559be7029923280ee64507be7e1478e828a19882024959d"
	I0924 01:19:59.578762  503471 logs.go:123] Gathering logs for etcd [1a02c274b765eec9b703f23f5251e302342e1238e7833b5bdd4c5284939dba4e] ...
	I0924 01:19:59.578860  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1a02c274b765eec9b703f23f5251e302342e1238e7833b5bdd4c5284939dba4e"
	I0924 01:19:59.621417  503471 logs.go:123] Gathering logs for kubelet ...
	I0924 01:19:59.621447  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0924 01:19:59.677632  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431464     666 reflector.go:138] object-"kube-system"/"kube-proxy-token-6n88c": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-6n88c" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:59.677885  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431566     666 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:59.678107  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431638     666 reflector.go:138] object-"kube-system"/"kindnet-token-jt6n9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-jt6n9" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:59.678337  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431704     666 reflector.go:138] object-"kube-system"/"storage-provisioner-token-g5gtv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-g5gtv" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:59.678549  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431771     666 reflector.go:138] object-"default"/"default-token-2t7hj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-2t7hj" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:59.678769  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.431832     666 reflector.go:138] object-"kube-system"/"metrics-server-token-dpjw8": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-dpjw8" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:59.679001  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.433069     666 reflector.go:138] object-"kube-system"/"coredns-token-djfwt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-djfwt" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:59.679205  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:22 old-k8s-version-654890 kubelet[666]: E0924 01:14:22.433138     666 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-654890" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-654890' and this object
	W0924 01:19:59.686595  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:24 old-k8s-version-654890 kubelet[666]: E0924 01:14:24.244333     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0924 01:19:59.688182  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:25 old-k8s-version-654890 kubelet[666]: E0924 01:14:25.186030     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.690962  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:39 old-k8s-version-654890 kubelet[666]: E0924 01:14:39.793083     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0924 01:19:59.693368  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:51 old-k8s-version-654890 kubelet[666]: E0924 01:14:51.304991     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.693697  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:52 old-k8s-version-654890 kubelet[666]: E0924 01:14:52.318075     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.693885  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:53 old-k8s-version-654890 kubelet[666]: E0924 01:14:53.784377     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.694358  503471 logs.go:138] Found kubelet problem: Sep 24 01:14:55 old-k8s-version-654890 kubelet[666]: E0924 01:14:55.344748     666 pod_workers.go:191] Error syncing pod c12ca6a0-fd9b-45bf-9da0-2ec1193cce32 ("storage-provisioner_kube-system(c12ca6a0-fd9b-45bf-9da0-2ec1193cce32)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c12ca6a0-fd9b-45bf-9da0-2ec1193cce32)"
	W0924 01:19:59.695337  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:01 old-k8s-version-654890 kubelet[666]: E0924 01:15:01.366255     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.697828  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:07 old-k8s-version-654890 kubelet[666]: E0924 01:15:07.792722     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0924 01:19:59.698297  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:11 old-k8s-version-654890 kubelet[666]: E0924 01:15:11.008383     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.698482  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:20 old-k8s-version-654890 kubelet[666]: E0924 01:15:20.784619     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.699079  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:26 old-k8s-version-654890 kubelet[666]: E0924 01:15:26.452520     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.699407  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:30 old-k8s-version-654890 kubelet[666]: E0924 01:15:30.986615     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.699593  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:33 old-k8s-version-654890 kubelet[666]: E0924 01:15:33.783850     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.699929  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:44 old-k8s-version-654890 kubelet[666]: E0924 01:15:44.783545     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.700115  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:44 old-k8s-version-654890 kubelet[666]: E0924 01:15:44.784908     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.700443  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:59 old-k8s-version-654890 kubelet[666]: E0924 01:15:59.784164     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.702897  503471 logs.go:138] Found kubelet problem: Sep 24 01:15:59 old-k8s-version-654890 kubelet[666]: E0924 01:15:59.792225     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0924 01:19:59.703245  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:14 old-k8s-version-654890 kubelet[666]: E0924 01:16:14.789130     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.703708  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:15 old-k8s-version-654890 kubelet[666]: E0924 01:16:15.580475     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.704038  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:20 old-k8s-version-654890 kubelet[666]: E0924 01:16:20.992161     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.704222  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:29 old-k8s-version-654890 kubelet[666]: E0924 01:16:29.787709     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.704549  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:34 old-k8s-version-654890 kubelet[666]: E0924 01:16:34.784335     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.704734  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:42 old-k8s-version-654890 kubelet[666]: E0924 01:16:42.784427     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.705065  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:45 old-k8s-version-654890 kubelet[666]: E0924 01:16:45.783505     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.705249  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:53 old-k8s-version-654890 kubelet[666]: E0924 01:16:53.784108     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.705575  503471 logs.go:138] Found kubelet problem: Sep 24 01:16:57 old-k8s-version-654890 kubelet[666]: E0924 01:16:57.783801     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.705759  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:06 old-k8s-version-654890 kubelet[666]: E0924 01:17:06.784121     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.706092  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:11 old-k8s-version-654890 kubelet[666]: E0924 01:17:11.783481     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.706279  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:18 old-k8s-version-654890 kubelet[666]: E0924 01:17:18.783911     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.706609  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:24 old-k8s-version-654890 kubelet[666]: E0924 01:17:24.784083     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.709045  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:31 old-k8s-version-654890 kubelet[666]: E0924 01:17:31.792392     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0924 01:19:59.709642  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:39 old-k8s-version-654890 kubelet[666]: E0924 01:17:39.856284     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.709971  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:40 old-k8s-version-654890 kubelet[666]: E0924 01:17:40.985923     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.710166  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:45 old-k8s-version-654890 kubelet[666]: E0924 01:17:45.784024     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.710493  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:55 old-k8s-version-654890 kubelet[666]: E0924 01:17:55.783900     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.710677  503471 logs.go:138] Found kubelet problem: Sep 24 01:17:59 old-k8s-version-654890 kubelet[666]: E0924 01:17:59.784186     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.711009  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:06 old-k8s-version-654890 kubelet[666]: E0924 01:18:06.788365     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.711196  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:12 old-k8s-version-654890 kubelet[666]: E0924 01:18:12.783859     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.711569  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:18 old-k8s-version-654890 kubelet[666]: E0924 01:18:18.783560     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.711755  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:27 old-k8s-version-654890 kubelet[666]: E0924 01:18:27.783904     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.712084  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:30 old-k8s-version-654890 kubelet[666]: E0924 01:18:30.783887     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.712268  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:41 old-k8s-version-654890 kubelet[666]: E0924 01:18:41.784268     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.712597  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:44 old-k8s-version-654890 kubelet[666]: E0924 01:18:44.783947     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.712784  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:55 old-k8s-version-654890 kubelet[666]: E0924 01:18:55.783966     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.713114  503471 logs.go:138] Found kubelet problem: Sep 24 01:18:58 old-k8s-version-654890 kubelet[666]: E0924 01:18:58.784580     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.713298  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:09 old-k8s-version-654890 kubelet[666]: E0924 01:19:09.783890     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.713628  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:11 old-k8s-version-654890 kubelet[666]: E0924 01:19:11.783671     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.713812  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:20 old-k8s-version-654890 kubelet[666]: E0924 01:19:20.784316     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.714144  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:23 old-k8s-version-654890 kubelet[666]: E0924 01:19:23.783518     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.714487  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:34 old-k8s-version-654890 kubelet[666]: E0924 01:19:34.788045     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:19:59.714673  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:34 old-k8s-version-654890 kubelet[666]: E0924 01:19:34.792957     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.714857  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:45 old-k8s-version-654890 kubelet[666]: E0924 01:19:45.783770     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:19:59.715195  503471 logs.go:138] Found kubelet problem: Sep 24 01:19:49 old-k8s-version-654890 kubelet[666]: E0924 01:19:49.783506     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
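Two failures repeat through the kubelet scan above: metrics-server-9975d5f86-5qvnr alternates between ErrImagePull and ImagePullBackOff because its image is pinned to the deliberately unresolvable registry fake.domain, and dashboard-metrics-scraper-8d5bb5db8-99rbp sits in CrashLoopBackOff with the back-off growing from 10s to the 2m40s seen in the final entries. A quick way to get the pod-level view of both (a sketch; it assumes the kubeconfig context name matches the node name above):

    kubectl --context old-k8s-version-654890 -n kube-system describe pod metrics-server-9975d5f86-5qvnr
    kubectl --context old-k8s-version-654890 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-8d5bb5db8-99rbp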
	I0924 01:19:59.715208  503471 logs.go:123] Gathering logs for kube-apiserver [0d53a8585147b008b66d8a2452e78b09c653dbd3ce70d1d015e55f15fe5bc97f] ...
	I0924 01:19:59.715224  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0d53a8585147b008b66d8a2452e78b09c653dbd3ce70d1d015e55f15fe5bc97f"
	I0924 01:19:59.775878  503471 logs.go:123] Gathering logs for kube-apiserver [a3f73fa00d8780bfbae087dd100b5dea468ee8262086a416cf2f1a32baf45e2c] ...
	I0924 01:19:59.775913  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a3f73fa00d8780bfbae087dd100b5dea468ee8262086a416cf2f1a32baf45e2c"
	I0924 01:19:59.848947  503471 logs.go:123] Gathering logs for coredns [726b9b637e70873b8dcf15fbdf3dcedddcb9cd6aefaf87948d0de9b3e24e0da9] ...
	I0924 01:19:59.848982  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 726b9b637e70873b8dcf15fbdf3dcedddcb9cd6aefaf87948d0de9b3e24e0da9"
	I0924 01:19:59.893787  503471 logs.go:123] Gathering logs for kube-scheduler [11cd4f142a26cd68a385f7146deef4ee88d1b79b18a062cef614b19d4789bc57] ...
	I0924 01:19:59.893817  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 11cd4f142a26cd68a385f7146deef4ee88d1b79b18a062cef614b19d4789bc57"
	I0924 01:19:59.934822  503471 logs.go:123] Gathering logs for kube-proxy [a216776df4a7683d11f652ba6188ed06bc8f4d2aca1cfdfd75fd8caa734d5f7c] ...
	I0924 01:19:59.934854  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a216776df4a7683d11f652ba6188ed06bc8f4d2aca1cfdfd75fd8caa734d5f7c"
	I0924 01:19:59.975700  503471 logs.go:123] Gathering logs for kube-controller-manager [14974f31e17ef863e970aac9ec385dd3a7df85dabded10c6c4158fa5ae8bb04b] ...
	I0924 01:19:59.975727  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 14974f31e17ef863e970aac9ec385dd3a7df85dabded10c6c4158fa5ae8bb04b"
	I0924 01:20:00.150686  503471 logs.go:123] Gathering logs for containerd ...
	I0924 01:20:00.150774  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0924 01:20:00.348822  503471 logs.go:123] Gathering logs for dmesg ...
	I0924 01:20:00.348953  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0924 01:20:00.421331  503471 logs.go:123] Gathering logs for describe nodes ...
	I0924 01:20:00.421432  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0924 01:20:00.860083  503471 logs.go:123] Gathering logs for etcd [4b3e46c31d87b9f4360a7d001b62bfbec9eaeb7fac41c03c7afdcf345bdc85d2] ...
	I0924 01:20:00.860117  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4b3e46c31d87b9f4360a7d001b62bfbec9eaeb7fac41c03c7afdcf345bdc85d2"
	I0924 01:20:00.933708  503471 logs.go:123] Gathering logs for kubernetes-dashboard [ffc127edc7c13cabaa59d04ea0a177c94e9e6bf5bfccbfda48fd77224f966e63] ...
	I0924 01:20:00.933741  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ffc127edc7c13cabaa59d04ea0a177c94e9e6bf5bfccbfda48fd77224f966e63"
	I0924 01:20:00.983639  503471 logs.go:123] Gathering logs for container status ...
	I0924 01:20:00.983672  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0924 01:20:01.031385  503471 logs.go:123] Gathering logs for kube-controller-manager [840c0a636700f21e186c2b7d0a495ae8ba333ce02654b24940c178384884ac55] ...
	I0924 01:20:01.031544  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 840c0a636700f21e186c2b7d0a495ae8ba333ce02654b24940c178384884ac55"
	I0924 01:20:01.119721  503471 logs.go:123] Gathering logs for kindnet [c1d6f4b139eb39cb24db27d8618300757f678faca88fa20d24e066fa8a75c335] ...
	I0924 01:20:01.119762  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c1d6f4b139eb39cb24db27d8618300757f678faca88fa20d24e066fa8a75c335"
	I0924 01:20:01.189736  503471 logs.go:123] Gathering logs for storage-provisioner [fd25616712b12bdb0e36c34f8d44469da4cb256070f158c54dc08e1b4037a13d] ...
	I0924 01:20:01.189781  503471 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd25616712b12bdb0e36c34f8d44469da4cb256070f158c54dc08e1b4037a13d"
	I0924 01:20:01.236887  503471 out.go:358] Setting ErrFile to fd 2...
	I0924 01:20:01.236929  503471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0924 01:20:01.236990  503471 out.go:270] X Problems detected in kubelet:
	W0924 01:20:01.237011  503471 out.go:270]   Sep 24 01:19:23 old-k8s-version-654890 kubelet[666]: E0924 01:19:23.783518     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:20:01.237022  503471 out.go:270]   Sep 24 01:19:34 old-k8s-version-654890 kubelet[666]: E0924 01:19:34.788045     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	W0924 01:20:01.237031  503471 out.go:270]   Sep 24 01:19:34 old-k8s-version-654890 kubelet[666]: E0924 01:19:34.792957     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:20:01.237044  503471 out.go:270]   Sep 24 01:19:45 old-k8s-version-654890 kubelet[666]: E0924 01:19:45.783770     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0924 01:20:01.237050  503471 out.go:270]   Sep 24 01:19:49 old-k8s-version-654890 kubelet[666]: E0924 01:19:49.783506     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	I0924 01:20:01.237057  503471 out.go:358] Setting ErrFile to fd 2...
	I0924 01:20:01.237068  503471 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:20:11.238141  503471 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0924 01:20:11.249366  503471 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
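The two lines above are the healthz probe finally succeeding: api_server.go polls https://192.168.76.2:8443/healthz, and the bare "ok" is the plain-text body of the 200 response. The same probe can be issued through kubectl, which handles the client certificates (a sketch; the context name is assumed to match the profile):

    kubectl --context old-k8s-version-654890 get --raw=/healthz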
	I0924 01:20:11.251766  503471 out.go:201] 
	W0924 01:20:11.253416  503471 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0924 01:20:11.253457  503471 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0924 01:20:11.253478  503471 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0924 01:20:11.253487  503471 out.go:270] * 
	W0924 01:20:11.254517  503471 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0924 01:20:11.256620  503471 out.go:201] 
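Despite the 200 from healthz, the wait loop gives up because the control plane never reported the expected v1.20.0 version within the 6m0s budget, so minikube exits with K8S_UNHEALTHY_CONTROL_PLANE and points at issue 11417. The suggested recovery, taken verbatim from the output above, is a full teardown:

    minikube delete --all --purge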
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	dbc349201d122       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   88126bedeb6e3       dashboard-metrics-scraper-8d5bb5db8-99rbp
	fb460cedc7032       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         3                   5e24378a3bb45       storage-provisioner
	ffc127edc7c13       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   f06d76e7c81f4       kubernetes-dashboard-cd95d586-h24mv
	726b9b637e708       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   5c0836d67030d       coredns-74ff55c5b-4h5vb
	a3efc4d0adcbc       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   2971d9181326c       kube-proxy-dctnp
	c1d6f4b139eb3       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   bb0ecc9201018       kindnet-nlwr9
	c9d42857f2854       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   08ed38edff5f4       busybox
	fd25616712b12       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         2                   5e24378a3bb45       storage-provisioner
	11cd4f142a26c       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   e667d66c4f0fa       kube-scheduler-old-k8s-version-654890
	14974f31e17ef       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   e54d68aa9f6cf       kube-controller-manager-old-k8s-version-654890
	1a02c274b765e       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   b7520c800888c       etcd-old-k8s-version-654890
	0d53a8585147b       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   d4238bde7ae3f       kube-apiserver-old-k8s-version-654890
	ab7e937daae24       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   1739a553cdd41       busybox
	ef14acfdf46ff       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   acd6649c873a8       coredns-74ff55c5b-4h5vb
	321cc991e8aee       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   d54bd258985f3       kindnet-nlwr9
	a216776df4a76       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   e7534ab746538       kube-proxy-dctnp
	a3f73fa00d878       2c08bbbc02d3a       9 minutes ago       Exited              kube-apiserver              0                   1cc1f10e426da       kube-apiserver-old-k8s-version-654890
	92e903457a976       e7605f88f17d6       9 minutes ago       Exited              kube-scheduler              0                   c65917054c52e       kube-scheduler-old-k8s-version-654890
	840c0a636700f       1df8a2b116bd1       9 minutes ago       Exited              kube-controller-manager     0                   e4410cb979bab       kube-controller-manager-old-k8s-version-654890
	4b3e46c31d87b       05b738aa1bc63       9 minutes ago       Exited              etcd                        0                   ee5ed1b99e768       etcd-old-k8s-version-654890
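The table above is the "container status" gathering step (the crictl ps -a fallback run at 01:20:00 earlier in the log); note the attempt-5 dashboard-metrics-scraper container already Exited, matching the CrashLoopBackOff entries from kubelet. To narrow the same listing to failed containers only (a sketch; --state support depends on the installed crictl version):

    sudo crictl ps -a --state exited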
	
	
	==> containerd <==
	Sep 24 01:16:14 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:16:14.819256262Z" level=info msg="CreateContainer within sandbox \"88126bedeb6e3b952fa5fb4dcaba221d7815e6fb9331135f05a02c2901fd56e2\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"673c98d5ea337644569e7b6b17b7eff8df8749f9704e79eae5f49d2c87f5a07b\""
	Sep 24 01:16:14 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:16:14.821771010Z" level=info msg="StartContainer for \"673c98d5ea337644569e7b6b17b7eff8df8749f9704e79eae5f49d2c87f5a07b\""
	Sep 24 01:16:14 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:16:14.891316541Z" level=info msg="StartContainer for \"673c98d5ea337644569e7b6b17b7eff8df8749f9704e79eae5f49d2c87f5a07b\" returns successfully"
	Sep 24 01:16:14 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:16:14.924728953Z" level=info msg="shim disconnected" id=673c98d5ea337644569e7b6b17b7eff8df8749f9704e79eae5f49d2c87f5a07b namespace=k8s.io
	Sep 24 01:16:14 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:16:14.924951181Z" level=warning msg="cleaning up after shim disconnected" id=673c98d5ea337644569e7b6b17b7eff8df8749f9704e79eae5f49d2c87f5a07b namespace=k8s.io
	Sep 24 01:16:14 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:16:14.925030984Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 24 01:16:15 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:16:15.600267470Z" level=info msg="RemoveContainer for \"b4d87e032e2da4676d5aad7b4fee7e24821a7bc42626369c5fefd58fc67981e2\""
	Sep 24 01:16:15 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:16:15.608576139Z" level=info msg="RemoveContainer for \"b4d87e032e2da4676d5aad7b4fee7e24821a7bc42626369c5fefd58fc67981e2\" returns successfully"
	Sep 24 01:17:31 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:17:31.784472043Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 24 01:17:31 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:17:31.789911194Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 24 01:17:31 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:17:31.791711644Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 24 01:17:31 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:17:31.791761802Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 24 01:17:38 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:17:38.786893937Z" level=info msg="CreateContainer within sandbox \"88126bedeb6e3b952fa5fb4dcaba221d7815e6fb9331135f05a02c2901fd56e2\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Sep 24 01:17:38 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:17:38.803529502Z" level=info msg="CreateContainer within sandbox \"88126bedeb6e3b952fa5fb4dcaba221d7815e6fb9331135f05a02c2901fd56e2\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"dbc349201d12250e06795b49e35b576fac970cbc68e51834bf8c0b6b6e3ec5b8\""
	Sep 24 01:17:38 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:17:38.804212251Z" level=info msg="StartContainer for \"dbc349201d12250e06795b49e35b576fac970cbc68e51834bf8c0b6b6e3ec5b8\""
	Sep 24 01:17:38 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:17:38.894837484Z" level=info msg="StartContainer for \"dbc349201d12250e06795b49e35b576fac970cbc68e51834bf8c0b6b6e3ec5b8\" returns successfully"
	Sep 24 01:17:38 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:17:38.921121806Z" level=info msg="shim disconnected" id=dbc349201d12250e06795b49e35b576fac970cbc68e51834bf8c0b6b6e3ec5b8 namespace=k8s.io
	Sep 24 01:17:38 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:17:38.921188227Z" level=warning msg="cleaning up after shim disconnected" id=dbc349201d12250e06795b49e35b576fac970cbc68e51834bf8c0b6b6e3ec5b8 namespace=k8s.io
	Sep 24 01:17:38 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:17:38.921201240Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 24 01:17:39 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:17:39.853280411Z" level=info msg="RemoveContainer for \"673c98d5ea337644569e7b6b17b7eff8df8749f9704e79eae5f49d2c87f5a07b\""
	Sep 24 01:17:39 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:17:39.859426097Z" level=info msg="RemoveContainer for \"673c98d5ea337644569e7b6b17b7eff8df8749f9704e79eae5f49d2c87f5a07b\" returns successfully"
	Sep 24 01:20:12 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:20:12.786365040Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 24 01:20:12 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:20:12.801404917Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 24 01:20:12 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:20:12.803421250Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 24 01:20:12 old-k8s-version-654890 containerd[573]: time="2024-09-24T01:20:12.803537459Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [726b9b637e70873b8dcf15fbdf3dcedddcb9cd6aefaf87948d0de9b3e24e0da9] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:43974 - 30852 "HINFO IN 7371709559352309961.4617567100753704921. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01302635s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0924 01:14:56.446636       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-24 01:14:26.446076485 +0000 UTC m=+0.047964985) (total time: 30.000445822s):
	Trace[2019727887]: [30.000445822s] [30.000445822s] END
	E0924 01:14:56.446675       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0924 01:14:56.446954       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-24 01:14:26.44661475 +0000 UTC m=+0.048503250) (total time: 30.000278749s):
	Trace[939984059]: [30.000278749s] [30.000278749s] END
	E0924 01:14:56.446998       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0924 01:14:56.447557       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-24 01:14:26.44684113 +0000 UTC m=+0.048729622) (total time: 30.000692755s):
	Trace[911902081]: [30.000692755s] [30.000692755s] END
	E0924 01:14:56.447578       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [ef14acfdf46ffeb7940e8ac4eaff81bfbeec196a22051bc2cb550f636b541f4c] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:34331 - 53042 "HINFO IN 8439755339388553706.260013902647775604. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.041521832s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-654890
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-654890
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=41795ff643dcbe39cdf81f27d064464d20ae8e7c
	                    minikube.k8s.io/name=old-k8s-version-654890
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_24T01_11_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 24 Sep 2024 01:11:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-654890
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 24 Sep 2024 01:20:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 24 Sep 2024 01:15:12 +0000   Tue, 24 Sep 2024 01:11:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 24 Sep 2024 01:15:12 +0000   Tue, 24 Sep 2024 01:11:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 24 Sep 2024 01:15:12 +0000   Tue, 24 Sep 2024 01:11:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 24 Sep 2024 01:15:12 +0000   Tue, 24 Sep 2024 01:11:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-654890
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 3dde889bde1140d582de9eb21e00c1b1
	  System UUID:                e3111167-6378-4792-ab33-07385a8ccf74
	  Boot ID:                    e579fd69-d9d0-4441-8d26-00b8ee3b7574
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 coredns-74ff55c5b-4h5vb                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m40s
	  kube-system                 etcd-old-k8s-version-654890                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m47s
	  kube-system                 kindnet-nlwr9                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m40s
	  kube-system                 kube-apiserver-old-k8s-version-654890             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m47s
	  kube-system                 kube-controller-manager-old-k8s-version-654890    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m47s
	  kube-system                 kube-proxy-dctnp                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m40s
	  kube-system                 kube-scheduler-old-k8s-version-654890             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m47s
	  kube-system                 metrics-server-9975d5f86-5qvnr                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m33s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-99rbp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-h24mv               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  9m7s (x4 over 9m7s)  kubelet     Node old-k8s-version-654890 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m7s (x4 over 9m7s)  kubelet     Node old-k8s-version-654890 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m7s (x4 over 9m7s)  kubelet     Node old-k8s-version-654890 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m48s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m48s                kubelet     Node old-k8s-version-654890 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m48s                kubelet     Node old-k8s-version-654890 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m48s                kubelet     Node old-k8s-version-654890 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m47s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m40s                kubelet     Node old-k8s-version-654890 status is now: NodeReady
	  Normal  Starting                 8m38s                kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m3s                 kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m3s (x8 over 6m3s)  kubelet     Node old-k8s-version-654890 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s (x8 over 6m3s)  kubelet     Node old-k8s-version-654890 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s (x7 over 6m3s)  kubelet     Node old-k8s-version-654890 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m3s                 kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m47s                kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Sep23 23:56] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep24 00:45] hrtimer: interrupt took 5498017 ns
	
	
	==> etcd [1a02c274b765eec9b703f23f5251e302342e1238e7833b5bdd4c5284939dba4e] <==
	2024-09-24 01:16:09.668255 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:16:19.668137 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:16:29.667959 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:16:39.668081 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:16:49.667891 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:16:59.668158 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:17:09.668114 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:17:19.668094 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:17:29.668046 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:17:39.667997 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:17:49.668415 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:17:59.668303 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:18:09.668035 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:18:19.668043 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:18:29.668094 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:18:39.668026 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:18:49.667943 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:18:59.668103 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:19:09.667943 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:19:19.668030 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:19:29.668136 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:19:39.667924 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:19:49.668105 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:19:59.668131 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:20:09.668354 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [4b3e46c31d87b9f4360a7d001b62bfbec9eaeb7fac41c03c7afdcf345bdc85d2] <==
	raft2024/09/24 01:11:07 INFO: ea7e25599daad906 became leader at term 2
	raft2024/09/24 01:11:07 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-09-24 01:11:07.631231 I | etcdserver: published {Name:old-k8s-version-654890 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-09-24 01:11:07.631941 I | embed: ready to serve client requests
	2024-09-24 01:11:07.642711 I | embed: serving client requests on 192.168.76.2:2379
	2024-09-24 01:11:07.642838 I | etcdserver: setting up the initial cluster version to 3.4
	2024-09-24 01:11:07.643046 I | embed: ready to serve client requests
	2024-09-24 01:11:07.644906 I | embed: serving client requests on 127.0.0.1:2379
	2024-09-24 01:11:07.646991 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-09-24 01:11:07.647213 I | etcdserver/api: enabled capabilities for version 3.4
	2024-09-24 01:11:16.972706 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:11:27.839879 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:11:35.734995 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:11:45.735657 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:11:55.735468 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:12:05.736045 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:12:15.735067 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:12:25.734887 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:12:35.735082 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:12:45.735076 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:12:55.735061 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:13:05.736010 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:13:15.734974 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:13:25.734860 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-24 01:13:35.735108 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 01:20:13 up  3:02,  0 users,  load average: 0.58, 1.68, 2.41
	Linux old-k8s-version-654890 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [321cc991e8aee83030aaf0fd4a0169e888f5b32b23247e48167abb3444235dee] <==
	I0924 01:11:37.005177       1 controller.go:374] Syncing nftables rules
	I0924 01:11:46.804422       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:11:46.804459       1 main.go:299] handling current node
	I0924 01:11:56.804261       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:11:56.804298       1 main.go:299] handling current node
	I0924 01:12:06.803690       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:12:06.803757       1 main.go:299] handling current node
	I0924 01:12:16.811019       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:12:16.811053       1 main.go:299] handling current node
	I0924 01:12:26.810733       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:12:26.810768       1 main.go:299] handling current node
	I0924 01:12:36.803693       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:12:36.803728       1 main.go:299] handling current node
	I0924 01:12:46.809427       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:12:46.809464       1 main.go:299] handling current node
	I0924 01:12:56.803684       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:12:56.803724       1 main.go:299] handling current node
	I0924 01:13:06.811086       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:13:06.811122       1 main.go:299] handling current node
	I0924 01:13:16.812772       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:13:16.812831       1 main.go:299] handling current node
	I0924 01:13:26.805061       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:13:26.805101       1 main.go:299] handling current node
	I0924 01:13:36.804067       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:13:36.804100       1 main.go:299] handling current node
	
	
	==> kindnet [c1d6f4b139eb39cb24db27d8618300757f678faca88fa20d24e066fa8a75c335] <==
	I0924 01:18:05.624256       1 main.go:299] handling current node
	I0924 01:18:15.626998       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:18:15.627191       1 main.go:299] handling current node
	I0924 01:18:25.620358       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:18:25.620396       1 main.go:299] handling current node
	I0924 01:18:35.624003       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:18:35.624035       1 main.go:299] handling current node
	I0924 01:18:45.629084       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:18:45.629194       1 main.go:299] handling current node
	I0924 01:18:55.629455       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:18:55.629550       1 main.go:299] handling current node
	I0924 01:19:05.627281       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:19:05.627398       1 main.go:299] handling current node
	I0924 01:19:15.629337       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:19:15.629437       1 main.go:299] handling current node
	I0924 01:19:25.620904       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:19:25.620948       1 main.go:299] handling current node
	I0924 01:19:35.627385       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:19:35.627434       1 main.go:299] handling current node
	I0924 01:19:45.623070       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:19:45.623120       1 main.go:299] handling current node
	I0924 01:19:55.627180       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:19:55.627299       1 main.go:299] handling current node
	I0924 01:20:05.627387       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0924 01:20:05.627487       1 main.go:299] handling current node
	
	
	==> kube-apiserver [0d53a8585147b008b66d8a2452e78b09c653dbd3ce70d1d015e55f15fe5bc97f] <==
	I0924 01:16:50.075246       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0924 01:16:50.075257       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0924 01:17:24.884591       1 handler_proxy.go:102] no RequestInfo found in the context
	E0924 01:17:24.884691       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0924 01:17:24.884712       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 01:17:32.916702       1 client.go:360] parsed scheme: "passthrough"
	I0924 01:17:32.916772       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0924 01:17:32.916782       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0924 01:18:09.335315       1 client.go:360] parsed scheme: "passthrough"
	I0924 01:18:09.335361       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0924 01:18:09.335545       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0924 01:18:52.999123       1 client.go:360] parsed scheme: "passthrough"
	I0924 01:18:52.999170       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0924 01:18:52.999182       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0924 01:19:23.456574       1 handler_proxy.go:102] no RequestInfo found in the context
	E0924 01:19:23.456655       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0924 01:19:23.456664       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0924 01:19:27.215933       1 client.go:360] parsed scheme: "passthrough"
	I0924 01:19:27.215978       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0924 01:19:27.215987       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0924 01:20:01.868907       1 client.go:360] parsed scheme: "passthrough"
	I0924 01:20:01.868951       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0924 01:20:01.868959       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [a3f73fa00d8780bfbae087dd100b5dea468ee8262086a416cf2f1a32baf45e2c] <==
	I0924 01:11:15.342478       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0924 01:11:15.342561       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0924 01:11:15.371413       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0924 01:11:15.378448       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0924 01:11:15.378532       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0924 01:11:15.890404       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0924 01:11:15.939347       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0924 01:11:16.098730       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0924 01:11:16.100105       1 controller.go:606] quota admission added evaluator for: endpoints
	I0924 01:11:16.106707       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0924 01:11:17.073923       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0924 01:11:17.440540       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0924 01:11:17.498620       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0924 01:11:25.899471       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0924 01:11:33.171121       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0924 01:11:33.172351       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0924 01:11:53.705296       1 client.go:360] parsed scheme: "passthrough"
	I0924 01:11:53.705343       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0924 01:11:53.705496       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0924 01:12:29.944745       1 client.go:360] parsed scheme: "passthrough"
	I0924 01:12:29.944958       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0924 01:12:29.944976       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0924 01:13:08.100452       1 client.go:360] parsed scheme: "passthrough"
	I0924 01:13:08.100496       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0924 01:13:08.100505       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [14974f31e17ef863e970aac9ec385dd3a7df85dabded10c6c4158fa5ae8bb04b] <==
	W0924 01:15:45.719178       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0924 01:16:12.835620       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0924 01:16:17.369809       1 request.go:655] Throttling request took 1.048420999s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1?timeout=32s
	W0924 01:16:18.221443       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0924 01:16:43.337760       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0924 01:16:49.872066       1 request.go:655] Throttling request took 1.048345831s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1beta1?timeout=32s
	W0924 01:16:50.724123       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0924 01:17:13.839739       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0924 01:17:22.374644       1 request.go:655] Throttling request took 1.048419792s, request: GET:https://192.168.76.2:8443/apis/admissionregistration.k8s.io/v1?timeout=32s
	W0924 01:17:23.226115       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0924 01:17:44.341746       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0924 01:17:54.876511       1 request.go:655] Throttling request took 1.048387215s, request: GET:https://192.168.76.2:8443/apis/policy/v1beta1?timeout=32s
	W0924 01:17:55.728086       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0924 01:18:14.843877       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0924 01:18:27.378520       1 request.go:655] Throttling request took 1.048423036s, request: GET:https://192.168.76.2:8443/apis/batch/v1?timeout=32s
	W0924 01:18:28.229934       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0924 01:18:45.346488       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0924 01:18:59.880377       1 request.go:655] Throttling request took 1.048485652s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0924 01:19:00.731947       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0924 01:19:15.848415       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0924 01:19:32.382500       1 request.go:655] Throttling request took 1.04822641s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0924 01:19:33.233807       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0924 01:19:46.350672       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0924 01:20:04.884268       1 request.go:655] Throttling request took 1.048447779s, request: GET:https://192.168.76.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
	W0924 01:20:05.735757       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-controller-manager [840c0a636700f21e186c2b7d0a495ae8ba333ce02654b24940c178384884ac55] <==
	E0924 01:11:33.103429       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	I0924 01:11:33.140840       1 shared_informer.go:247] Caches are synced for stateful set 
	I0924 01:11:33.163609       1 shared_informer.go:247] Caches are synced for deployment 
	I0924 01:11:33.165375       1 shared_informer.go:247] Caches are synced for daemon sets 
	I0924 01:11:33.176363       1 shared_informer.go:247] Caches are synced for disruption 
	I0924 01:11:33.176536       1 disruption.go:339] Sending events to api server.
	I0924 01:11:33.195077       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
	I0924 01:11:33.200884       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-dtp6w"
	I0924 01:11:33.227585       1 shared_informer.go:247] Caches are synced for resource quota 
	I0924 01:11:33.241250       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-nlwr9"
	I0924 01:11:33.241783       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-dctnp"
	I0924 01:11:33.242288       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-4h5vb"
	I0924 01:11:33.267428       1 shared_informer.go:247] Caches are synced for resource quota 
	E0924 01:11:33.298134       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"f33e8adb-f65a-4a76-bfa6-5d096706451a", ResourceVersion:"249", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63862737077, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4001701dc0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4001701de0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4001701e00), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40018c0740), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001701e20), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4001701e40), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4001701e80)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40016e8fc0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40019e0718), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004cc460), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40002fbbc8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40019e0768)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0924 01:11:33.390937       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0924 01:11:33.691120       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0924 01:11:33.716924       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0924 01:11:33.716952       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0924 01:11:34.428361       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0924 01:11:34.462041       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-dtp6w"
	I0924 01:11:38.043649       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0924 01:13:39.799889       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	I0924 01:13:39.836822       1 event.go:291] "Event occurred" object="kube-system/metrics-server-9975d5f86" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-9975d5f86-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found"
	E0924 01:13:39.887978       1 replica_set.go:532] sync "kube-system/metrics-server-9975d5f86" failed with pods "metrics-server-9975d5f86-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found
	E0924 01:13:39.960422       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [a216776df4a7683d11f652ba6188ed06bc8f4d2aca1cfdfd75fd8caa734d5f7c] <==
	I0924 01:11:35.486972       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0924 01:11:35.487060       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0924 01:11:35.517765       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0924 01:11:35.517859       1 server_others.go:185] Using iptables Proxier.
	I0924 01:11:35.518069       1 server.go:650] Version: v1.20.0
	I0924 01:11:35.518998       1 config.go:315] Starting service config controller
	I0924 01:11:35.519013       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0924 01:11:35.519031       1 config.go:224] Starting endpoint slice config controller
	I0924 01:11:35.519035       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0924 01:11:35.623506       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0924 01:11:35.623629       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-proxy [a3efc4d0adcbc5ed4559be7029923280ee64507be7e1478e828a19882024959d] <==
	I0924 01:14:26.465975       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0924 01:14:26.466242       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0924 01:14:26.484078       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0924 01:14:26.484174       1 server_others.go:185] Using iptables Proxier.
	I0924 01:14:26.484395       1 server.go:650] Version: v1.20.0
	I0924 01:14:26.485221       1 config.go:315] Starting service config controller
	I0924 01:14:26.485243       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0924 01:14:26.485263       1 config.go:224] Starting endpoint slice config controller
	I0924 01:14:26.485267       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0924 01:14:26.585413       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	I0924 01:14:26.585424       1 shared_informer.go:247] Caches are synced for service config 
	
	
	==> kube-scheduler [11cd4f142a26cd68a385f7146deef4ee88d1b79b18a062cef614b19d4789bc57] <==
	I0924 01:14:17.021499       1 serving.go:331] Generated self-signed cert in-memory
	W0924 01:14:22.381067       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0924 01:14:22.381109       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0924 01:14:22.381129       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0924 01:14:22.381137       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0924 01:14:22.535691       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0924 01:14:22.543705       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 01:14:22.543747       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 01:14:22.543766       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0924 01:14:22.647341       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [92e903457a9761a7af4f917c69d759a5da333110e45a058d985445733540fbd3] <==
	W0924 01:11:14.579894       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0924 01:11:14.580118       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0924 01:11:14.580206       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0924 01:11:14.580279       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0924 01:11:14.625550       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0924 01:11:14.625720       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 01:11:14.625804       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0924 01:11:14.625975       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0924 01:11:14.632615       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0924 01:11:14.632730       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0924 01:11:14.643004       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0924 01:11:14.643632       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0924 01:11:14.643977       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0924 01:11:14.644258       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0924 01:11:14.644497       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 01:11:14.645415       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0924 01:11:14.645533       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0924 01:11:14.645871       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0924 01:11:14.646118       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0924 01:11:14.647374       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0924 01:11:15.540508       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0924 01:11:15.597715       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0924 01:11:15.627226       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0924 01:11:15.643360       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0924 01:11:16.226097       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 24 01:18:41 old-k8s-version-654890 kubelet[666]: E0924 01:18:41.784268     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 24 01:18:44 old-k8s-version-654890 kubelet[666]: I0924 01:18:44.783165     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: dbc349201d12250e06795b49e35b576fac970cbc68e51834bf8c0b6b6e3ec5b8
	Sep 24 01:18:44 old-k8s-version-654890 kubelet[666]: E0924 01:18:44.783947     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	Sep 24 01:18:55 old-k8s-version-654890 kubelet[666]: E0924 01:18:55.783966     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 24 01:18:58 old-k8s-version-654890 kubelet[666]: I0924 01:18:58.783801     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: dbc349201d12250e06795b49e35b576fac970cbc68e51834bf8c0b6b6e3ec5b8
	Sep 24 01:18:58 old-k8s-version-654890 kubelet[666]: E0924 01:18:58.784580     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	Sep 24 01:19:09 old-k8s-version-654890 kubelet[666]: E0924 01:19:09.783890     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 24 01:19:11 old-k8s-version-654890 kubelet[666]: I0924 01:19:11.783287     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: dbc349201d12250e06795b49e35b576fac970cbc68e51834bf8c0b6b6e3ec5b8
	Sep 24 01:19:11 old-k8s-version-654890 kubelet[666]: E0924 01:19:11.783671     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	Sep 24 01:19:20 old-k8s-version-654890 kubelet[666]: E0924 01:19:20.784316     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 24 01:19:23 old-k8s-version-654890 kubelet[666]: I0924 01:19:23.783172     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: dbc349201d12250e06795b49e35b576fac970cbc68e51834bf8c0b6b6e3ec5b8
	Sep 24 01:19:23 old-k8s-version-654890 kubelet[666]: E0924 01:19:23.783518     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	Sep 24 01:19:34 old-k8s-version-654890 kubelet[666]: I0924 01:19:34.787670     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: dbc349201d12250e06795b49e35b576fac970cbc68e51834bf8c0b6b6e3ec5b8
	Sep 24 01:19:34 old-k8s-version-654890 kubelet[666]: E0924 01:19:34.788045     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	Sep 24 01:19:34 old-k8s-version-654890 kubelet[666]: E0924 01:19:34.792957     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 24 01:19:45 old-k8s-version-654890 kubelet[666]: E0924 01:19:45.783770     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 24 01:19:49 old-k8s-version-654890 kubelet[666]: I0924 01:19:49.783155     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: dbc349201d12250e06795b49e35b576fac970cbc68e51834bf8c0b6b6e3ec5b8
	Sep 24 01:19:49 old-k8s-version-654890 kubelet[666]: E0924 01:19:49.783506     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	Sep 24 01:19:59 old-k8s-version-654890 kubelet[666]: E0924 01:19:59.784017     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 24 01:20:01 old-k8s-version-654890 kubelet[666]: I0924 01:20:01.783191     666 scope.go:95] [topologymanager] RemoveContainer - Container ID: dbc349201d12250e06795b49e35b576fac970cbc68e51834bf8c0b6b6e3ec5b8
	Sep 24 01:20:01 old-k8s-version-654890 kubelet[666]: E0924 01:20:01.783590     666 pod_workers.go:191] Error syncing pod a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b ("dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-99rbp_kubernetes-dashboard(a14eb15f-c6ee-4a9e-ad4e-9b64b2037c7b)"
	Sep 24 01:20:12 old-k8s-version-654890 kubelet[666]: E0924 01:20:12.803702     666 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 24 01:20:12 old-k8s-version-654890 kubelet[666]: E0924 01:20:12.803757     666 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 24 01:20:12 old-k8s-version-654890 kubelet[666]: E0924 01:20:12.803914     666 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-dpjw8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 24 01:20:12 old-k8s-version-654890 kubelet[666]: E0924 01:20:12.803962     666 pod_workers.go:191] Error syncing pod 38d39260-494a-4760-91bb-091e6afba5ca ("metrics-server-9975d5f86-5qvnr_kube-system(38d39260-494a-4760-91bb-091e6afba5ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
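
Note: the metrics-server image here is deliberately pointed at an unresolvable registry (fake.domain) by the test suite, so the repeated ErrImagePull/ImagePullBackOff entries are expected background noise rather than the failure cause. An illustrative check that the host really cannot resolve (assumes nslookup is present in the node image):

	minikube -p old-k8s-version-654890 ssh -- nslookup fake.domain
	# Expected to fail with "no such host", matching the kubelet errors above.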
	
	
	==> kubernetes-dashboard [ffc127edc7c13cabaa59d04ea0a177c94e9e6bf5bfccbfda48fd77224f966e63] <==
	2024/09/24 01:14:44 Using namespace: kubernetes-dashboard
	2024/09/24 01:14:44 Using in-cluster config to connect to apiserver
	2024/09/24 01:14:44 Using secret token for csrf signing
	2024/09/24 01:14:44 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/24 01:14:44 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/24 01:14:44 Successful initial request to the apiserver, version: v1.20.0
	2024/09/24 01:14:44 Generating JWE encryption key
	2024/09/24 01:14:44 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/24 01:14:44 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/24 01:14:45 Initializing JWE encryption key from synchronized object
	2024/09/24 01:14:45 Creating in-cluster Sidecar client
	2024/09/24 01:14:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/24 01:14:45 Serving insecurely on HTTP port: 9090
	2024/09/24 01:15:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/24 01:15:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/24 01:16:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/24 01:16:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/24 01:17:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/24 01:17:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/24 01:18:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/24 01:18:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/24 01:19:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/24 01:19:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/24 01:14:44 Starting overwatch
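
Note: the recurring metric-client health-check failures line up with the dashboard-metrics-scraper pod crash-looping in the kubelet log above; the dashboard itself started normally. An illustrative way to inspect the scraper's state:

	kubectl --context old-k8s-version-654890 -n kubernetes-dashboard get pods,svc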
	
	
	==> storage-provisioner [fb460cedc70328f07448f754198dcea04f2edccb76c1ffe5b7a6a0942e34a3f2] <==
	I0924 01:15:10.007854       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0924 01:15:10.027208       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0924 01:15:10.030010       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0924 01:15:27.515761       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0924 01:15:27.517783       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-654890_6e1f6478-2303-410e-bcb2-e9fddb38566d!
	I0924 01:15:27.518553       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1c3ff6ee-6fcf-4f7b-8635-09c423e83901", APIVersion:"v1", ResourceVersion:"859", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-654890_6e1f6478-2303-410e-bcb2-e9fddb38566d became leader
	I0924 01:15:27.619292       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-654890_6e1f6478-2303-410e-bcb2-e9fddb38566d!
	
	
	==> storage-provisioner [fd25616712b12bdb0e36c34f8d44469da4cb256070f158c54dc08e1b4037a13d] <==
	I0924 01:14:24.269942       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0924 01:14:54.272040       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
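
Note: this first storage-provisioner instance died because the apiserver service IP (10.96.0.1:443) was not yet reachable during the restart; its replacement (the fb460ced... instance above) started cleanly and acquired the leader lease. The election is recorded on the Endpoints object named in the event above, which can be inspected (illustrative):

	kubectl --context old-k8s-version-654890 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml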
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-654890 -n old-k8s-version-654890
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-654890 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-5qvnr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-654890 describe pod metrics-server-9975d5f86-5qvnr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-654890 describe pod metrics-server-9975d5f86-5qvnr: exit status 1 (106.888759ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-5qvnr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-654890 describe pod metrics-server-9975d5f86-5qvnr: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (381.67s)
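
Note: the describe at helpers_test.go:277 raced with pod churn: metrics-server-9975d5f86-5qvnr was listed as non-running but had already been replaced by the time describe ran, hence the NotFound. Selecting by label instead of name sidesteps the race (illustrative; assumes the addon's usual k8s-app=metrics-server label):

	kubectl --context old-k8s-version-654890 -n kube-system describe pod -l k8s-app=metrics-server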

                                                
                                    

Test pass (298/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 14.69
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 5.89
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.23
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 214
31 TestAddons/serial/GCPAuth/Namespaces 0.18
33 TestAddons/parallel/Registry 14.97
34 TestAddons/parallel/Ingress 19.99
35 TestAddons/parallel/InspektorGadget 11.09
36 TestAddons/parallel/MetricsServer 5.98
38 TestAddons/parallel/CSI 40.32
39 TestAddons/parallel/Headlamp 15.95
40 TestAddons/parallel/CloudSpanner 5.57
41 TestAddons/parallel/LocalPath 52.85
42 TestAddons/parallel/NvidiaDevicePlugin 6.55
43 TestAddons/parallel/Yakd 11.82
44 TestAddons/StoppedEnableDisable 12.28
45 TestCertOptions 41.71
46 TestCertExpiration 231.81
48 TestForceSystemdFlag 36.93
49 TestForceSystemdEnv 35.81
50 TestDockerEnvContainerd 43.74
55 TestErrorSpam/setup 29.79
56 TestErrorSpam/start 0.71
57 TestErrorSpam/status 1.07
58 TestErrorSpam/pause 1.8
59 TestErrorSpam/unpause 1.91
60 TestErrorSpam/stop 1.46
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 80.16
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 6.5
67 TestFunctional/serial/KubeContext 0.07
68 TestFunctional/serial/KubectlGetPods 0.09
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.34
72 TestFunctional/serial/CacheCmd/cache/add_local 1.27
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
76 TestFunctional/serial/CacheCmd/cache/cache_reload 2.09
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
80 TestFunctional/serial/ExtraConfig 46.37
81 TestFunctional/serial/ComponentHealth 0.1
82 TestFunctional/serial/LogsCmd 1.71
83 TestFunctional/serial/LogsFileCmd 1.73
84 TestFunctional/serial/InvalidService 5.28
86 TestFunctional/parallel/ConfigCmd 0.44
87 TestFunctional/parallel/DashboardCmd 12.26
88 TestFunctional/parallel/DryRun 0.42
89 TestFunctional/parallel/InternationalLanguage 0.18
90 TestFunctional/parallel/StatusCmd 1.04
94 TestFunctional/parallel/ServiceCmdConnect 10.59
95 TestFunctional/parallel/AddonsCmd 0.21
96 TestFunctional/parallel/PersistentVolumeClaim 23.15
98 TestFunctional/parallel/SSHCmd 0.65
99 TestFunctional/parallel/CpCmd 2.27
101 TestFunctional/parallel/FileSync 0.33
102 TestFunctional/parallel/CertSync 2.05
106 TestFunctional/parallel/NodeLabels 0.11
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.77
110 TestFunctional/parallel/License 0.3
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.62
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.44
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 6.23
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
124 TestFunctional/parallel/ProfileCmd/profile_list 0.41
125 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
126 TestFunctional/parallel/MountCmd/any-port 8.35
127 TestFunctional/parallel/ServiceCmd/List 0.73
128 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
130 TestFunctional/parallel/ServiceCmd/Format 0.38
131 TestFunctional/parallel/ServiceCmd/URL 0.37
132 TestFunctional/parallel/MountCmd/specific-port 1.9
133 TestFunctional/parallel/MountCmd/VerifyCleanup 1.51
134 TestFunctional/parallel/Version/short 0.07
135 TestFunctional/parallel/Version/components 1.29
136 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
137 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
138 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
139 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
140 TestFunctional/parallel/ImageCommands/ImageBuild 3.76
141 TestFunctional/parallel/ImageCommands/Setup 0.74
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.36
143 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.09
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.38
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
146 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.78
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.48
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 133.29
159 TestMultiControlPlane/serial/DeployApp 35.53
160 TestMultiControlPlane/serial/PingHostFromPods 1.85
161 TestMultiControlPlane/serial/AddWorkerNode 23.11
162 TestMultiControlPlane/serial/NodeLabels 0.12
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 1
164 TestMultiControlPlane/serial/CopyFile 18.92
165 TestMultiControlPlane/serial/StopSecondaryNode 13.09
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.77
167 TestMultiControlPlane/serial/RestartSecondaryNode 29.11
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.07
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 123.34
170 TestMultiControlPlane/serial/DeleteSecondaryNode 10.38
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.74
172 TestMultiControlPlane/serial/StopCluster 36.65
173 TestMultiControlPlane/serial/RestartCluster 76.18
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.75
175 TestMultiControlPlane/serial/AddSecondaryNode 45.87
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.03
180 TestJSONOutput/start/Command 51.39
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.73
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.66
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.88
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.23
205 TestKicCustomNetwork/create_custom_network 40.71
206 TestKicCustomNetwork/use_default_bridge_network 32.51
207 TestKicExistingNetwork 35.92
208 TestKicCustomSubnet 33.61
209 TestKicStaticIP 34.09
210 TestMainNoArgs 0.06
211 TestMinikubeProfile 68.52
214 TestMountStart/serial/StartWithMountFirst 6.69
215 TestMountStart/serial/VerifyMountFirst 0.25
216 TestMountStart/serial/StartWithMountSecond 6.09
217 TestMountStart/serial/VerifyMountSecond 0.26
218 TestMountStart/serial/DeleteFirst 1.62
219 TestMountStart/serial/VerifyMountPostDelete 0.25
220 TestMountStart/serial/Stop 1.21
221 TestMountStart/serial/RestartStopped 7.8
222 TestMountStart/serial/VerifyMountPostStop 0.27
225 TestMultiNode/serial/FreshStart2Nodes 66.65
226 TestMultiNode/serial/DeployApp2Nodes 20.99
227 TestMultiNode/serial/PingHostFrom2Pods 1.02
228 TestMultiNode/serial/AddNode 18.1
229 TestMultiNode/serial/MultiNodeLabels 0.1
230 TestMultiNode/serial/ProfileList 0.69
231 TestMultiNode/serial/CopyFile 9.85
232 TestMultiNode/serial/StopNode 2.28
233 TestMultiNode/serial/StartAfterStop 9.6
234 TestMultiNode/serial/RestartKeepsNodes 102.69
235 TestMultiNode/serial/DeleteNode 5.73
236 TestMultiNode/serial/StopMultiNode 24.02
237 TestMultiNode/serial/RestartMultiNode 53.98
238 TestMultiNode/serial/ValidateNameConflict 34.85
243 TestPreload 114.41
245 TestScheduledStopUnix 107.66
248 TestInsufficientStorage 10.27
249 TestRunningBinaryUpgrade 97.52
251 TestKubernetesUpgrade 107.7
252 TestMissingContainerUpgrade 180.36
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
255 TestNoKubernetes/serial/StartWithK8s 39.32
256 TestNoKubernetes/serial/StartWithStopK8s 17.77
257 TestNoKubernetes/serial/Start 8.39
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.33
259 TestNoKubernetes/serial/ProfileList 1.19
260 TestNoKubernetes/serial/Stop 1.26
261 TestNoKubernetes/serial/StartNoArgs 7.96
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.43
263 TestStoppedBinaryUpgrade/Setup 1.02
264 TestStoppedBinaryUpgrade/Upgrade 118.13
273 TestPause/serial/Start 100.71
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.41
282 TestNetworkPlugins/group/false 3.66
286 TestPause/serial/SecondStartNoReconfiguration 7.6
287 TestPause/serial/Pause 0.88
288 TestPause/serial/VerifyStatus 0.35
289 TestPause/serial/Unpause 1.11
290 TestPause/serial/PauseAgain 1.14
291 TestPause/serial/DeletePaused 2.73
292 TestPause/serial/VerifyDeletedResources 0.45
294 TestStartStop/group/old-k8s-version/serial/FirstStart 179.36
296 TestStartStop/group/no-preload/serial/FirstStart 73.85
297 TestStartStop/group/old-k8s-version/serial/DeployApp 9.98
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.43
299 TestStartStop/group/old-k8s-version/serial/Stop 13.63
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
302 TestStartStop/group/no-preload/serial/DeployApp 9.48
303 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.2
304 TestStartStop/group/no-preload/serial/Stop 12.17
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
306 TestStartStop/group/no-preload/serial/SecondStart 289.26
307 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.03
308 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.11
309 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
310 TestStartStop/group/no-preload/serial/Pause 3.07
312 TestStartStop/group/embed-certs/serial/FirstStart 85.08
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.18
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.33
316 TestStartStop/group/old-k8s-version/serial/Pause 3.7
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 67.14
319 TestStartStop/group/embed-certs/serial/DeployApp 9.35
320 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.36
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.16
322 TestStartStop/group/embed-certs/serial/Stop 12.11
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.03
324 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.06
325 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
326 TestStartStop/group/embed-certs/serial/SecondStart 271.64
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
328 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 272.43
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
330 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
332 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.17
333 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
334 TestStartStop/group/embed-certs/serial/Pause 3.22
335 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
336 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.96
338 TestStartStop/group/newest-cni/serial/FirstStart 45.92
339 TestNetworkPlugins/group/auto/Start 91.65
340 TestStartStop/group/newest-cni/serial/DeployApp 0
341 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.35
342 TestStartStop/group/newest-cni/serial/Stop 1.33
343 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.31
344 TestStartStop/group/newest-cni/serial/SecondStart 17
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
348 TestStartStop/group/newest-cni/serial/Pause 3.07
349 TestNetworkPlugins/group/kindnet/Start 61.6
350 TestNetworkPlugins/group/auto/KubeletFlags 0.34
351 TestNetworkPlugins/group/auto/NetCatPod 9.46
352 TestNetworkPlugins/group/auto/DNS 0.21
353 TestNetworkPlugins/group/auto/Localhost 0.18
354 TestNetworkPlugins/group/auto/HairPin 0.16
355 TestNetworkPlugins/group/calico/Start 72.43
356 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
358 TestNetworkPlugins/group/kindnet/NetCatPod 9.33
359 TestNetworkPlugins/group/kindnet/DNS 0.22
360 TestNetworkPlugins/group/kindnet/Localhost 0.22
361 TestNetworkPlugins/group/kindnet/HairPin 0.19
362 TestNetworkPlugins/group/custom-flannel/Start 57.13
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.34
365 TestNetworkPlugins/group/calico/NetCatPod 10.42
366 TestNetworkPlugins/group/calico/DNS 0.21
367 TestNetworkPlugins/group/calico/Localhost 0.18
368 TestNetworkPlugins/group/calico/HairPin 0.33
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.43
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.38
371 TestNetworkPlugins/group/enable-default-cni/Start 82.05
372 TestNetworkPlugins/group/custom-flannel/DNS 0.28
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
375 TestNetworkPlugins/group/flannel/Start 53.25
376 TestNetworkPlugins/group/flannel/ControllerPod 6.01
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.28
379 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
380 TestNetworkPlugins/group/flannel/NetCatPod 10.28
381 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
382 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
383 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
384 TestNetworkPlugins/group/flannel/DNS 0.19
385 TestNetworkPlugins/group/flannel/Localhost 0.2
386 TestNetworkPlugins/group/flannel/HairPin 0.19
387 TestNetworkPlugins/group/bridge/Start 74.12
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
389 TestNetworkPlugins/group/bridge/NetCatPod 9.25
390 TestNetworkPlugins/group/bridge/DNS 0.17
391 TestNetworkPlugins/group/bridge/Localhost 0.14
392 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (14.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-142004 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-142004 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (14.692664382s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (14.69s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0924 00:24:03.455540  301711 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0924 00:24:03.455623  301711 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-296322/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-142004
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-142004: exit status 85 (71.133136ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-142004 | jenkins | v1.34.0 | 24 Sep 24 00:23 UTC |          |
	|         | -p download-only-142004        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 00:23:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 00:23:48.804085  301716 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:23:48.804291  301716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:23:48.804319  301716 out.go:358] Setting ErrFile to fd 2...
	I0924 00:23:48.804343  301716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:23:48.804606  301716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-296322/.minikube/bin
	W0924 00:23:48.804768  301716 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19696-296322/.minikube/config/config.json: open /home/jenkins/minikube-integration/19696-296322/.minikube/config/config.json: no such file or directory
	I0924 00:23:48.805205  301716 out.go:352] Setting JSON to true
	I0924 00:23:48.806104  301716 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7574,"bootTime":1727129855,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0924 00:23:48.806210  301716 start.go:139] virtualization:  
	I0924 00:23:48.809944  301716 out.go:97] [download-only-142004] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0924 00:23:48.810128  301716 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19696-296322/.minikube/cache/preloaded-tarball: no such file or directory
	I0924 00:23:48.810178  301716 notify.go:220] Checking for updates...
	I0924 00:23:48.812411  301716 out.go:169] MINIKUBE_LOCATION=19696
	I0924 00:23:48.814651  301716 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 00:23:48.816449  301716 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19696-296322/kubeconfig
	I0924 00:23:48.818604  301716 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-296322/.minikube
	I0924 00:23:48.820379  301716 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0924 00:23:48.825423  301716 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0924 00:23:48.825726  301716 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 00:23:48.847576  301716 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0924 00:23:48.847693  301716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 00:23:48.907495  301716 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-24 00:23:48.897069627 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 00:23:48.907608  301716 docker.go:318] overlay module found
	I0924 00:23:48.909599  301716 out.go:97] Using the docker driver based on user configuration
	I0924 00:23:48.909632  301716 start.go:297] selected driver: docker
	I0924 00:23:48.909640  301716 start.go:901] validating driver "docker" against <nil>
	I0924 00:23:48.909763  301716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 00:23:48.955604  301716 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-24 00:23:48.946460143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 00:23:48.955819  301716 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 00:23:48.956087  301716 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0924 00:23:48.956243  301716 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0924 00:23:48.958538  301716 out.go:169] Using Docker driver with root privileges
	I0924 00:23:48.960362  301716 cni.go:84] Creating CNI manager for ""
	I0924 00:23:48.960430  301716 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0924 00:23:48.960459  301716 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0924 00:23:48.960552  301716 start.go:340] cluster config:
	{Name:download-only-142004 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-142004 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:23:48.962602  301716 out.go:97] Starting "download-only-142004" primary control-plane node in "download-only-142004" cluster
	I0924 00:23:48.962626  301716 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0924 00:23:48.964357  301716 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0924 00:23:48.964386  301716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0924 00:23:48.964552  301716 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0924 00:23:48.979942  301716 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0924 00:23:48.980783  301716 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0924 00:23:48.980889  301716 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0924 00:23:49.051943  301716 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0924 00:23:49.051970  301716 cache.go:56] Caching tarball of preloaded images
	I0924 00:23:49.052732  301716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0924 00:23:49.055394  301716 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0924 00:23:49.055422  301716 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0924 00:23:49.139839  301716 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19696-296322/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0924 00:23:54.324401  301716 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0924 00:23:54.324539  301716 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19696-296322/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0924 00:23:55.418722  301716 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0924 00:23:55.419208  301716 profile.go:143] Saving config to /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/download-only-142004/config.json ...
	I0924 00:23:55.419246  301716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/download-only-142004/config.json: {Name:mk16d4dc84ff454e0284ae7d14a6f0aa4059a09c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0924 00:23:55.419446  301716 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0924 00:23:55.419636  301716 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19696-296322/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-142004 host does not exist
	  To start a cluster, run: "minikube start -p download-only-142004"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
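
Note on the preload download in the log above: the tarball URL carries a checksum query parameter (md5:7e3d48ccb9f143791669d02e14ce1643), and preload.go verifies it after download (the "getting checksum" / "verifying checksum" lines). The cached file can be re-verified by hand (illustrative):

	md5sum /home/jenkins/minikube-integration/19696-296322/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	# Should print 7e3d48ccb9f143791669d02e14ce1643.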

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-142004
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (5.89s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-713417 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-713417 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.890563876s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.89s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0924 00:24:09.747630  301711 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I0924 00:24:09.747673  301711 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19696-296322/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-713417
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-713417: exit status 85 (76.977802ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-142004 | jenkins | v1.34.0 | 24 Sep 24 00:23 UTC |                     |
	|         | -p download-only-142004        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 24 Sep 24 00:24 UTC | 24 Sep 24 00:24 UTC |
	| delete  | -p download-only-142004        | download-only-142004 | jenkins | v1.34.0 | 24 Sep 24 00:24 UTC | 24 Sep 24 00:24 UTC |
	| start   | -o=json --download-only        | download-only-713417 | jenkins | v1.34.0 | 24 Sep 24 00:24 UTC |                     |
	|         | -p download-only-713417        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/24 00:24:03
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0924 00:24:03.901658  301916 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:24:03.901796  301916 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:24:03.901807  301916 out.go:358] Setting ErrFile to fd 2...
	I0924 00:24:03.901812  301916 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:24:03.902045  301916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-296322/.minikube/bin
	I0924 00:24:03.902450  301916 out.go:352] Setting JSON to true
	I0924 00:24:03.903343  301916 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7589,"bootTime":1727129855,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0924 00:24:03.903421  301916 start.go:139] virtualization:  
	I0924 00:24:03.906214  301916 out.go:97] [download-only-713417] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0924 00:24:03.906401  301916 notify.go:220] Checking for updates...
	I0924 00:24:03.909062  301916 out.go:169] MINIKUBE_LOCATION=19696
	I0924 00:24:03.911554  301916 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 00:24:03.913834  301916 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19696-296322/kubeconfig
	I0924 00:24:03.915803  301916 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-296322/.minikube
	I0924 00:24:03.918060  301916 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0924 00:24:03.921983  301916 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0924 00:24:03.922276  301916 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 00:24:03.949332  301916 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0924 00:24:03.949441  301916 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 00:24:04.008493  301916 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-24 00:24:03.991624054 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 00:24:04.008650  301916 docker.go:318] overlay module found
	I0924 00:24:04.011058  301916 out.go:97] Using the docker driver based on user configuration
	I0924 00:24:04.011122  301916 start.go:297] selected driver: docker
	I0924 00:24:04.011133  301916 start.go:901] validating driver "docker" against <nil>
	I0924 00:24:04.011316  301916 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 00:24:04.064125  301916 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-24 00:24:04.054102217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 00:24:04.064346  301916 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0924 00:24:04.064683  301916 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0924 00:24:04.064883  301916 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0924 00:24:04.068051  301916 out.go:169] Using Docker driver with root privileges
	I0924 00:24:04.070995  301916 cni.go:84] Creating CNI manager for ""
	I0924 00:24:04.071082  301916 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0924 00:24:04.071099  301916 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0924 00:24:04.071214  301916 start.go:340] cluster config:
	{Name:download-only-713417 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-713417 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:24:04.073816  301916 out.go:97] Starting "download-only-713417" primary control-plane node in "download-only-713417" cluster
	I0924 00:24:04.073847  301916 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0924 00:24:04.076087  301916 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0924 00:24:04.076125  301916 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0924 00:24:04.076222  301916 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0924 00:24:04.092795  301916 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0924 00:24:04.092941  301916 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0924 00:24:04.092965  301916 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0924 00:24:04.092971  301916 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0924 00:24:04.092979  301916 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0924 00:24:04.133741  301916 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0924 00:24:04.133767  301916 cache.go:56] Caching tarball of preloaded images
	I0924 00:24:04.133937  301916 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0924 00:24:04.136165  301916 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0924 00:24:04.136195  301916 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0924 00:24:04.216810  301916 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19696-296322/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0924 00:24:08.098722  301916 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0924 00:24:08.098842  301916 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19696-296322/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-713417 host does not exist
	  To start a cluster, run: "minikube start -p download-only-713417"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.23s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-713417
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
I0924 00:24:11.009438  301711 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-584606 --alsologtostderr --binary-mirror http://127.0.0.1:42843 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-584606" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-584606
--- PASS: TestBinaryMirror (0.56s)
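
For context, --binary-mirror substitutes a caller-supplied URL for dl.k8s.io when minikube fetches the kubectl/kubelet/kubeadm binaries; the test harness serves its own cache on 127.0.0.1:42843 for this run. A minimal manual sketch, assuming a local directory that mirrors the dl.k8s.io release layout (the python3 file server is a stand-in, not what the test uses):

    # mirror/ must contain e.g. mirror/release/v1.31.1/bin/linux/arm64/kubectl
    (cd mirror && python3 -m http.server 42843 --bind 127.0.0.1 &)
    # point minikube at the mirror instead of dl.k8s.io
    out/minikube-linux-arm64 start --download-only -p binary-mirror-584606 \
        --binary-mirror http://127.0.0.1:42843 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 delete -p binary-mirror-584606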

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-321431
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-321431: exit status 85 (70.468963ms)

-- stdout --
	* Profile "addons-321431" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-321431"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-321431
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-321431: exit status 85 (69.120608ms)

-- stdout --
	* Profile "addons-321431" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-321431"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (214s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-321431 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-321431 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m34.003407097s)
--- PASS: TestAddons/Setup (214.00s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-321431 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-321431 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/parallel/Registry (14.97s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 2.75903ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-r8jb7" [01589e47-58af-41e9-8d33-bcfce48c058f] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004032215s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-2xfkh" [d06ba723-4918-49cb-be93-a014b45358bd] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004315735s
addons_test.go:338: (dbg) Run:  kubectl --context addons-321431 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-321431 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Done: kubectl --context addons-321431 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.939115761s)
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-321431 ip
2024/09/24 00:31:38 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-321431 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.97s)
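
The two health checks above reduce to probes that can be rerun by hand: an in-cluster request against the registry Service's DNS name, and a host-side request against the node IP on port 5000 (the kubectl command is lifted verbatim from the log; curl stands in for the test's internal HTTP GET, and 192.168.49.2 is this run's node IP):

    kubectl --context addons-321431 run --rm registry-test --restart=Never \
        --image=gcr.io/k8s-minikube/busybox -it -- \
        sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    curl -v "http://$(out/minikube-linux-arm64 -p addons-321431 ip):5000"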

TestAddons/parallel/Ingress (19.99s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-321431 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-321431 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-321431 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a0b9fc8c-2cb8-4726-9997-187dbd390b3d] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a0b9fc8c-2cb8-4726-9997-187dbd390b3d] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004766194s
I0924 00:32:07.306806  301711 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-321431 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-321431 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-321431 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-321431 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-321431 addons disable ingress-dns --alsologtostderr -v=1: (1.322558604s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-321431 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-321431 addons disable ingress --alsologtostderr -v=1: (7.844377764s)
--- PASS: TestAddons/parallel/Ingress (19.99s)
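
Both assertions here are plain HTTP and DNS probes, reproducible as follows (commands taken from the run; hello-john.test is declared in testdata/ingress-dns-example-v1.yaml):

    # ingress: nginx must answer for the Host header routed by the ingress rule
    out/minikube-linux-arm64 -p addons-321431 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # ingress-dns: the node IP serves DNS for names declared in ingress resources
    nslookup hello-john.test 192.168.49.2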

TestAddons/parallel/InspektorGadget (11.09s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-f92dm" [fa6798a0-d84e-4378-9827-737567976910] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004816694s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-321431
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-321431: (6.084228611s)
--- PASS: TestAddons/parallel/InspektorGadget (11.09s)

TestAddons/parallel/MetricsServer (5.98s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.473116ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-mdzrd" [b72987f5-368f-4a5a-856e-a80311906c88] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003980688s
addons_test.go:413: (dbg) Run:  kubectl --context addons-321431 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-321431 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.98s)

TestAddons/parallel/CSI (40.32s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0924 00:31:39.552814  301711 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0924 00:31:39.557979  301711 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0924 00:31:39.558014  301711 kapi.go:107] duration metric: took 7.830273ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 7.840768ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-321431 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321431 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321431 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321431 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321431 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321431 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321431 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321431 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-321431 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7bce2af9-a580-4fc5-8422-38cb9e76dd05] Pending
helpers_test.go:344: "task-pv-pod" [7bce2af9-a580-4fc5-8422-38cb9e76dd05] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [7bce2af9-a580-4fc5-8422-38cb9e76dd05] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.0044618s
addons_test.go:528: (dbg) Run:  kubectl --context addons-321431 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-321431 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-321431 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-321431 delete pod task-pv-pod
addons_test.go:538: (dbg) Done: kubectl --context addons-321431 delete pod task-pv-pod: (1.27025318s)
addons_test.go:544: (dbg) Run:  kubectl --context addons-321431 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-321431 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321431 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321431 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321431 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321431 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-321431 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [50045c28-1c72-4d1a-b7e4-0fcc08f59ab7] Pending
helpers_test.go:344: "task-pv-pod-restore" [50045c28-1c72-4d1a-b7e4-0fcc08f59ab7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [50045c28-1c72-4d1a-b7e4-0fcc08f59ab7] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00482189s
addons_test.go:570: (dbg) Run:  kubectl --context addons-321431 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-321431 delete pod task-pv-pod-restore: (1.447388803s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-321431 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-321431 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-321431 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-321431 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.727417536s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-321431 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:586: (dbg) Done: out/minikube-linux-arm64 -p addons-321431 addons disable volumesnapshots --alsologtostderr -v=1: (1.018341538s)
--- PASS: TestAddons/parallel/CSI (40.32s)
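
Stripped of the polling, the CSI exercise above is the standard snapshot round-trip; its kubectl skeleton (manifests referenced exactly as the test does, comments mine):

    kubectl --context addons-321431 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-321431 create -f testdata/csi-hostpath-driver/pv-pod.yaml        # writer pod binds the PVC
    kubectl --context addons-321431 create -f testdata/csi-hostpath-driver/snapshot.yaml      # VolumeSnapshot of hpvc
    kubectl --context addons-321431 delete pod task-pv-pod                                    # drop the originals
    kubectl --context addons-321431 delete pvc hpvc
    kubectl --context addons-321431 create -f testdata/csi-hostpath-driver/pvc-restore.yaml   # new PVC from the snapshot
    kubectl --context addons-321431 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml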

TestAddons/parallel/Headlamp (15.95s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-321431 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-b4hbw" [794af37b-498f-42d1-b655-0692f1b5ca14] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-b4hbw" [794af37b-498f-42d1-b655-0692f1b5ca14] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-b4hbw" [794af37b-498f-42d1-b655-0692f1b5ca14] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003911219s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-321431 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-321431 addons disable headlamp --alsologtostderr -v=1: (5.963296456s)
--- PASS: TestAddons/parallel/Headlamp (15.95s)

TestAddons/parallel/CloudSpanner (5.57s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-ff59c" [ba49566b-ae69-4ba8-904b-352cdb74599c] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003664504s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-321431
--- PASS: TestAddons/parallel/CloudSpanner (5.57s)

TestAddons/parallel/LocalPath (52.85s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-321431 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-321431 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321431 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321431 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321431 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321431 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321431 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-321431 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [d1860826-171d-4b5d-bb2c-7cfa2d9f7337] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [d1860826-171d-4b5d-bb2c-7cfa2d9f7337] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [d1860826-171d-4b5d-bb2c-7cfa2d9f7337] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003646293s
addons_test.go:938: (dbg) Run:  kubectl --context addons-321431 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-321431 ssh "cat /opt/local-path-provisioner/pvc-d8cf0782-07d6-40a9-8607-b32c6bab06f5_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-321431 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-321431 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-321431 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-321431 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.681316202s)
--- PASS: TestAddons/parallel/LocalPath (52.85s)
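
The local-path check is a write-then-read through the Rancher provisioner: a pod writes file1 into a test-pvc volume, and the file is read back from the provisioner's directory on the node. A condensed replay (the pvc-<uid> path component is generated per claim; this run's full path appears in the ssh line above):

    kubectl --context addons-321431 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-321431 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # after the pod completes, the data is on the node's filesystem:
    out/minikube-linux-arm64 -p addons-321431 ssh "cat /opt/local-path-provisioner/pvc-<uid>_default_test-pvc/file1"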

TestAddons/parallel/NvidiaDevicePlugin (6.55s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ngbxz" [8a3a3c47-bc68-4829-847f-7b602161033b] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004760266s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-321431
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.55s)

TestAddons/parallel/Yakd (11.82s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-cmn8w" [70a43a8f-591f-4f72-8908-edfae1452b72] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005019744s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-321431 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-321431 addons disable yakd --alsologtostderr -v=1: (5.815550964s)
--- PASS: TestAddons/parallel/Yakd (11.82s)

TestAddons/StoppedEnableDisable (12.28s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-321431
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-321431: (12.02281879s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-321431
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-321431
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-321431
--- PASS: TestAddons/StoppedEnableDisable (12.28s)

TestCertOptions (41.71s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-649069 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-649069 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (39.075950126s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-649069 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-649069 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-649069 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-649069" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-649069
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-649069: (1.970040001s)
--- PASS: TestCertOptions (41.71s)
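
The ssh/openssl step is what actually verifies the flags: every --apiserver-ips and --apiserver-names value must appear as a subjectAltName in the generated apiserver certificate, and the --apiserver-port value must show up in the kubeconfig server URL. Rerunning the inspection by hand (the grep filters are mine, not the test's):

    out/minikube-linux-arm64 -p cert-options-649069 ssh \
        "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
        | grep -A1 "Subject Alternative Name"
    kubectl --context cert-options-649069 config view | grep server   # expect port 8555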

TestCertExpiration (231.81s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-136100 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-136100 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (43.141251646s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-136100 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-136100 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.375493904s)
helpers_test.go:175: Cleaning up "cert-expiration-136100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-136100
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-136100: (2.293654422s)
--- PASS: TestCertExpiration (231.81s)
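
Rotation here is driven purely by --cert-expiration: the first start issues three-minute certificates, the test waits out that TTL (which accounts for most of the 231s total), and the second start with a one-year value (8760h) re-issues them instead of failing. As a sketch, with the explicit sleep standing in for the test's internal wait:

    out/minikube-linux-arm64 start -p cert-expiration-136100 --memory=2048 --cert-expiration=3m \
        --driver=docker --container-runtime=containerd
    sleep 180   # let the short-lived certs expire
    out/minikube-linux-arm64 start -p cert-expiration-136100 --memory=2048 --cert-expiration=8760h \
        --driver=docker --container-runtime=containerd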

TestForceSystemdFlag (36.93s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-492287 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-492287 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.586296016s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-492287 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-492287" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-492287
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-492287: (2.025591914s)
--- PASS: TestForceSystemdFlag (36.93s)
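
What the ssh step checks is containerd's cgroup driver: with --force-systemd, minikube should switch the runc runtime to the systemd driver, which in containerd's config.toml is the stock SystemdCgroup option. A manual spot-check (the grep is mine, not the test's):

    out/minikube-linux-arm64 start -p force-systemd-flag-492287 --memory=2048 --force-systemd \
        --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p force-systemd-flag-492287 ssh "cat /etc/containerd/config.toml" \
        | grep SystemdCgroup   # expect: SystemdCgroup = true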

TestForceSystemdEnv (35.81s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-980707 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-980707 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (32.890849374s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-980707 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-980707" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-980707
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-980707: (2.344090423s)
--- PASS: TestForceSystemdEnv (35.81s)

TestDockerEnvContainerd (43.74s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-589498 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-589498 --driver=docker  --container-runtime=containerd: (28.060336702s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-589498"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-589498": (1.036359356s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Uwq4PyS5uKL4/agent.320997" SSH_AGENT_PID="320998" DOCKER_HOST=ssh://docker@127.0.0.1:33145 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Uwq4PyS5uKL4/agent.320997" SSH_AGENT_PID="320998" DOCKER_HOST=ssh://docker@127.0.0.1:33145 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Uwq4PyS5uKL4/agent.320997" SSH_AGENT_PID="320998" DOCKER_HOST=ssh://docker@127.0.0.1:33145 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.152651181s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Uwq4PyS5uKL4/agent.320997" SSH_AGENT_PID="320998" DOCKER_HOST=ssh://docker@127.0.0.1:33145 docker image ls"
docker_test.go:250: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-Uwq4PyS5uKL4/agent.320997" SSH_AGENT_PID="320998" DOCKER_HOST=ssh://docker@127.0.0.1:33145 docker image ls": (1.02215553s)
helpers_test.go:175: Cleaning up "dockerenv-589498" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-589498
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-589498: (1.986305142s)
--- PASS: TestDockerEnvContainerd (43.74s)
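
The bash -c wrappers above emulate what a user would do with eval: docker-env --ssh-host points DOCKER_HOST at the node's engine over SSH, and --ssh-add loads the node's key into the agent, after which the plain docker CLI operates inside the cluster node. Typical interactive use, with the same flags as the test:

    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-589498)"
    docker version      # the server section now reports the engine inside the node
    docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls     # the built image lands in the node's image store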

TestErrorSpam/setup (29.79s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-061499 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-061499 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-061499 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-061499 --driver=docker  --container-runtime=containerd: (29.788999319s)
--- PASS: TestErrorSpam/setup (29.79s)

TestErrorSpam/start (0.71s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-061499 --log_dir /tmp/nospam-061499 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-061499 --log_dir /tmp/nospam-061499 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-061499 --log_dir /tmp/nospam-061499 start --dry-run
--- PASS: TestErrorSpam/start (0.71s)

TestErrorSpam/status (1.07s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-061499 --log_dir /tmp/nospam-061499 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-061499 --log_dir /tmp/nospam-061499 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-061499 --log_dir /tmp/nospam-061499 status
--- PASS: TestErrorSpam/status (1.07s)

TestErrorSpam/pause (1.8s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-061499 --log_dir /tmp/nospam-061499 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-061499 --log_dir /tmp/nospam-061499 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-061499 --log_dir /tmp/nospam-061499 pause
--- PASS: TestErrorSpam/pause (1.80s)

TestErrorSpam/unpause (1.91s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-061499 --log_dir /tmp/nospam-061499 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-061499 --log_dir /tmp/nospam-061499 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-061499 --log_dir /tmp/nospam-061499 unpause
--- PASS: TestErrorSpam/unpause (1.91s)

TestErrorSpam/stop (1.46s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-061499 --log_dir /tmp/nospam-061499 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-061499 --log_dir /tmp/nospam-061499 stop: (1.273758465s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-061499 --log_dir /tmp/nospam-061499 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-061499 --log_dir /tmp/nospam-061499 stop
--- PASS: TestErrorSpam/stop (1.46s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19696-296322/.minikube/files/etc/test/nested/copy/301711/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.16s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-346828 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-346828 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m20.16136507s)
--- PASS: TestFunctional/serial/StartWithProxy (80.16s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.5s)

=== RUN   TestFunctional/serial/SoftStart
I0924 00:36:21.568022  301711 config.go:182] Loaded profile config "functional-346828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-346828 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-346828 --alsologtostderr -v=8: (6.496203012s)
functional_test.go:663: soft start took 6.503330416s for "functional-346828" cluster.
I0924 00:36:28.064614  301711 config.go:182] Loaded profile config "functional-346828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (6.50s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-346828 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.34s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-346828 cache add registry.k8s.io/pause:3.1: (1.57883933s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-346828 cache add registry.k8s.io/pause:3.3: (1.483042682s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-346828 cache add registry.k8s.io/pause:latest: (1.278144803s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.34s)

TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-346828 /tmp/TestFunctionalserialCacheCmdcacheadd_local2347771708/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 cache add minikube-local-cache-test:functional-346828
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 cache delete minikube-local-cache-test:functional-346828
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-346828
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-346828 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (308.977851ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-346828 cache reload: (1.189069523s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.09s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 kubectl -- --context functional-346828 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-346828 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (46.37s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-346828 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-346828 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.366820525s)
functional_test.go:761: restart took 46.366953448s for "functional-346828" cluster.
I0924 00:37:23.073917  301711 config.go:182] Loaded profile config "functional-346828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (46.37s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-346828 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.71s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-346828 logs: (1.713994712s)
--- PASS: TestFunctional/serial/LogsCmd (1.71s)

TestFunctional/serial/LogsFileCmd (1.73s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 logs --file /tmp/TestFunctionalserialLogsFileCmd3108505960/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-346828 logs --file /tmp/TestFunctionalserialLogsFileCmd3108505960/001/logs.txt: (1.728078678s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.73s)

TestFunctional/serial/InvalidService (5.28s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-346828 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-346828
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-346828: exit status 115 (796.816793ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31689 |
	|-----------|-------------|-------------|---------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-346828 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-346828 delete -f testdata/invalidsvc.yaml: (1.232134026s)
--- PASS: TestFunctional/serial/InvalidService (5.28s)

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-346828 config get cpus: exit status 14 (71.578479ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-346828 config get cpus: exit status 14 (83.53522ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

TestFunctional/parallel/DashboardCmd (12.26s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-346828 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-346828 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 335920: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (12.26s)

TestFunctional/parallel/DryRun (0.42s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-346828 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-346828 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (193.748762ms)

-- stdout --
	* [functional-346828] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-296322/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-296322/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I0924 00:38:03.944865  335558 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:38:03.945059  335558 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:38:03.945071  335558 out.go:358] Setting ErrFile to fd 2...
	I0924 00:38:03.945076  335558 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:38:03.945333  335558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-296322/.minikube/bin
	I0924 00:38:03.945731  335558 out.go:352] Setting JSON to false
	I0924 00:38:03.946821  335558 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8429,"bootTime":1727129855,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0924 00:38:03.946953  335558 start.go:139] virtualization:  
	I0924 00:38:03.949346  335558 out.go:177] * [functional-346828] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0924 00:38:03.952204  335558 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 00:38:03.952278  335558 notify.go:220] Checking for updates...
	I0924 00:38:03.956712  335558 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 00:38:03.958973  335558 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-296322/kubeconfig
	I0924 00:38:03.961080  335558 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-296322/.minikube
	I0924 00:38:03.963150  335558 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0924 00:38:03.965422  335558 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 00:38:03.968246  335558 config.go:182] Loaded profile config "functional-346828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 00:38:03.968765  335558 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 00:38:04.024105  335558 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0924 00:38:04.024242  335558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 00:38:04.078188  335558 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-24 00:38:04.068459513 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 00:38:04.078304  335558 docker.go:318] overlay module found
	I0924 00:38:04.080569  335558 out.go:177] * Using the docker driver based on existing profile
	I0924 00:38:04.082826  335558 start.go:297] selected driver: docker
	I0924 00:38:04.082865  335558 start.go:901] validating driver "docker" against &{Name:functional-346828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-346828 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:38:04.083058  335558 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 00:38:04.085859  335558 out.go:201] 
	W0924 00:38:04.088195  335558 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0924 00:38:04.090325  335558 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-346828 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.42s)

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-346828 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-346828 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (181.523142ms)

-- stdout --
	* [functional-346828] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-296322/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-296322/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I0924 00:38:03.770848  335511 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:38:03.771027  335511 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:38:03.771052  335511 out.go:358] Setting ErrFile to fd 2...
	I0924 00:38:03.771068  335511 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:38:03.772016  335511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-296322/.minikube/bin
	I0924 00:38:03.772515  335511 out.go:352] Setting JSON to false
	I0924 00:38:03.773557  335511 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8429,"bootTime":1727129855,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0924 00:38:03.773632  335511 start.go:139] virtualization:  
	I0924 00:38:03.776450  335511 out.go:177] * [functional-346828] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0924 00:38:03.779118  335511 notify.go:220] Checking for updates...
	I0924 00:38:03.779964  335511 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 00:38:03.782248  335511 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 00:38:03.784970  335511 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-296322/kubeconfig
	I0924 00:38:03.786988  335511 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-296322/.minikube
	I0924 00:38:03.789062  335511 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0924 00:38:03.790973  335511 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 00:38:03.793804  335511 config.go:182] Loaded profile config "functional-346828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 00:38:03.794384  335511 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 00:38:03.828048  335511 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0924 00:38:03.828198  335511 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 00:38:03.884541  335511 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-24 00:38:03.873712997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 00:38:03.884655  335511 docker.go:318] overlay module found
	I0924 00:38:03.886996  335511 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0924 00:38:03.889796  335511 start.go:297] selected driver: docker
	I0924 00:38:03.889827  335511 start.go:901] validating driver "docker" against &{Name:functional-346828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-346828 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0924 00:38:03.889943  335511 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 00:38:03.892373  335511 out.go:201] 
	W0924 00:38:03.894674  335511 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0924 00:38:03.896894  335511 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.04s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

TestFunctional/parallel/ServiceCmdConnect (10.59s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-346828 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-346828 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-2bdqq" [13fc090c-e65e-4565-adc4-1a42fea908b1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-2bdqq" [13fc090c-e65e-4565-adc4-1a42fea908b1] Running
E0924 00:37:48.254751  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003426722s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31728
functional_test.go:1675: http://192.168.49.2:31728: success! body:

Hostname: hello-node-connect-65d86f57f4-2bdqq

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31728
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.59s)

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

TestFunctional/parallel/PersistentVolumeClaim (23.15s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [de45717e-6096-4935-a157-9e1b03dbc5a6] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003788783s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-346828 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-346828 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-346828 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-346828 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [843a389e-caf8-47dc-bb2d-b19f62c76307] Pending
helpers_test.go:344: "sp-pod" [843a389e-caf8-47dc-bb2d-b19f62c76307] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [843a389e-caf8-47dc-bb2d-b19f62c76307] Running
E0924 00:37:45.684346  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:37:45.690664  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:37:45.702369  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:37:45.723922  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:37:45.765624  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:37:45.847385  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:37:46.009201  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:37:46.330865  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:37:46.973070  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004289271s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-346828 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-346828 delete -f testdata/storage-provisioner/pod.yaml
E0924 00:37:50.816629  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-346828 delete -f testdata/storage-provisioner/pod.yaml: (1.083073767s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-346828 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1c801d8f-9283-472d-8f17-cfb9478ef0b1] Pending
helpers_test.go:344: "sp-pod" [1c801d8f-9283-472d-8f17-cfb9478ef0b1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004445831s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-346828 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.15s)

TestFunctional/parallel/SSHCmd (0.65s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.65s)

TestFunctional/parallel/CpCmd (2.27s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh -n functional-346828 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 cp functional-346828:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3708770592/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh -n functional-346828 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh -n functional-346828 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.27s)

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/301711/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh "sudo cat /etc/test/nested/copy/301711/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (2.05s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/301711.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh "sudo cat /etc/ssl/certs/301711.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/301711.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh "sudo cat /usr/share/ca-certificates/301711.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3017112.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh "sudo cat /etc/ssl/certs/3017112.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3017112.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh "sudo cat /usr/share/ca-certificates/3017112.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.05s)

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-346828 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-346828 ssh "sudo systemctl is-active docker": exit status 1 (378.220335ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-346828 ssh "sudo systemctl is-active crio": exit status 1 (391.668159ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.77s)

TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-346828 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-346828 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-346828 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-346828 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 332866: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.62s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-346828 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-346828 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [88073cac-7bb4-44f0-b73e-7f21dd88ed00] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [88073cac-7bb4-44f0-b73e-7f21dd88ed00] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004166614s
I0924 00:37:43.052706  301711 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-346828 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.180.244 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
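Taken together, the TunnelCmd/serial steps above reduce to the following manual flow; a sketch under the assumption that testsvc.yaml defines a LoadBalancer-type service, which is why a running minikube tunnel is what populates its ingress IP:

out/minikube-linux-arm64 -p functional-346828 tunnel --alsologtostderr &   # keep the tunnel running in the background
kubectl --context functional-346828 apply -f testdata/testsvc.yaml
IP=$(kubectl --context functional-346828 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl "http://${IP}/"   # hypothetical manual check; this run reported the tunnel at http://10.97.180.244 as working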

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-346828 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-346828 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-346828 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-8pccc" [e86288ab-0cfe-41e4-ab3e-2d1d24453e12] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-8pccc" [e86288ab-0cfe-41e4-ab3e-2d1d24453e12] Running
E0924 00:37:55.938237  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.010458572s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "350.666965ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "60.819263ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "339.660504ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "57.693826ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

TestFunctional/parallel/MountCmd/any-port (8.35s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-346828 /tmp/TestFunctionalparallelMountCmdany-port1568310596/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727138278549165818" to /tmp/TestFunctionalparallelMountCmdany-port1568310596/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727138278549165818" to /tmp/TestFunctionalparallelMountCmdany-port1568310596/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727138278549165818" to /tmp/TestFunctionalparallelMountCmdany-port1568310596/001/test-1727138278549165818
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-346828 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (323.967657ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0924 00:37:58.873401  301711 retry.go:31] will retry after 395.12891ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 24 00:37 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 24 00:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 24 00:37 test-1727138278549165818
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh cat /mount-9p/test-1727138278549165818
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-346828 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [525cca8a-e78f-4a7e-b601-60703193b1b3] Pending
helpers_test.go:344: "busybox-mount" [525cca8a-e78f-4a7e-b601-60703193b1b3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [525cca8a-e78f-4a7e-b601-60703193b1b3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [525cca8a-e78f-4a7e-b601-60703193b1b3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003405069s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-346828 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh stat /mount-9p/created-by-pod
E0924 00:38:06.179917  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-346828 /tmp/TestFunctionalparallelMountCmdany-port1568310596/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.35s)
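The any-port mount test above follows a pattern that can be replayed by hand; a sketch with /tmp/hostdir standing in for the per-test temp directory the harness created:

out/minikube-linux-arm64 mount -p functional-346828 /tmp/hostdir:/mount-9p &          # background the 9p mount server
out/minikube-linux-arm64 -p functional-346828 ssh "findmnt -T /mount-9p | grep 9p"    # confirm the 9p mount is live (the first attempt above raced the mount and was retried)
out/minikube-linux-arm64 -p functional-346828 ssh -- ls -la /mount-9p                 # host files are visible inside the guest
out/minikube-linux-arm64 -p functional-346828 ssh "sudo umount -f /mount-9p"          # cleanup, as the test does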

TestFunctional/parallel/ServiceCmd/List (0.73s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.73s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 service list -o json
functional_test.go:1494: Took "539.95809ms" to run "out/minikube-linux-arm64 -p functional-346828 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30671
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30671
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
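The ServiceCmd tests above exercise one deployment end to end; the equivalent manual sequence, using only commands shown in this log:

kubectl --context functional-346828 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-346828 expose deployment hello-node --type=NodePort --port=8080
out/minikube-linux-arm64 -p functional-346828 service list -o json
out/minikube-linux-arm64 -p functional-346828 service --namespace=default --https --url hello-node   # https://192.168.49.2:30671 in this run
out/minikube-linux-arm64 -p functional-346828 service hello-node --url                               # http://192.168.49.2:30671 in this run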

TestFunctional/parallel/MountCmd/specific-port (1.9s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-346828 /tmp/TestFunctionalparallelMountCmdspecific-port3655630065/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-346828 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (361.203326ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0924 00:38:07.259434  301711 retry.go:31] will retry after 406.14518ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-346828 /tmp/TestFunctionalparallelMountCmdspecific-port3655630065/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-346828 ssh "sudo umount -f /mount-9p": exit status 1 (281.587744ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-346828 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-346828 /tmp/TestFunctionalparallelMountCmdspecific-port3655630065/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.90s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-346828 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1280202121/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-346828 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1280202121/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-346828 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1280202121/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-346828 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-346828 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1280202121/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-346828 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1280202121/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-346828 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1280202121/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.29s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-346828 version -o=json --components: (1.292355226s)
--- PASS: TestFunctional/parallel/Version/components (1.29s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-346828 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-346828
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-346828
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-346828 image ls --format short --alsologtostderr:
I0924 00:38:19.537042  338280 out.go:345] Setting OutFile to fd 1 ...
I0924 00:38:19.537459  338280 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 00:38:19.537489  338280 out.go:358] Setting ErrFile to fd 2...
I0924 00:38:19.537509  338280 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 00:38:19.537796  338280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-296322/.minikube/bin
I0924 00:38:19.538669  338280 config.go:182] Loaded profile config "functional-346828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0924 00:38:19.539035  338280 config.go:182] Loaded profile config "functional-346828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0924 00:38:19.539739  338280 cli_runner.go:164] Run: docker container inspect functional-346828 --format={{.State.Status}}
I0924 00:38:19.556535  338280 ssh_runner.go:195] Run: systemctl --version
I0924 00:38:19.556587  338280 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-346828
I0924 00:38:19.575613  338280 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/functional-346828/id_rsa Username:docker}
I0924 00:38:19.677894  338280 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
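This and the next three ImageCommands/ImageList tests run the same command, varying only the --format flag; for reference:

out/minikube-linux-arm64 -p functional-346828 image ls --format short   # one repo:tag per line, as above
out/minikube-linux-arm64 -p functional-346828 image ls --format table   # aligned table with image IDs and sizes
out/minikube-linux-arm64 -p functional-346828 image ls --format json    # single JSON array
out/minikube-linux-arm64 -p functional-346828 image ls --format yaml    # YAML list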

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-346828 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kicbase/echo-server               | functional-346828  | sha256:ce2d2c | 2.17MB |
| docker.io/library/minikube-local-cache-test | functional-346828  | sha256:b7ea74 | 990B   |
| docker.io/library/nginx                     | latest             | sha256:195245 | 67.7MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-346828 image ls --format table --alsologtostderr:
I0924 00:38:20.330658  338487 out.go:345] Setting OutFile to fd 1 ...
I0924 00:38:20.330825  338487 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 00:38:20.330832  338487 out.go:358] Setting ErrFile to fd 2...
I0924 00:38:20.330838  338487 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 00:38:20.331248  338487 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-296322/.minikube/bin
I0924 00:38:20.331933  338487 config.go:182] Loaded profile config "functional-346828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0924 00:38:20.332060  338487 config.go:182] Loaded profile config "functional-346828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0924 00:38:20.332604  338487 cli_runner.go:164] Run: docker container inspect functional-346828 --format={{.State.Status}}
I0924 00:38:20.352176  338487 ssh_runner.go:195] Run: systemctl --version
I0924 00:38:20.352230  338487 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-346828
I0924 00:38:20.371868  338487 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/functional-346828/id_rsa Username:docker}
I0924 00:38:20.467623  338487 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-346828 image ls --format json --alsologtostderr:
[{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-346828"],"size":"2173567"},{"id":"sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"67695038"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repo
Digests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size"
:"33309097"},{"id":"sha256:b7ea7480ffd693b81fd930ff92b75785adaba6cd5ac80b2fa84eecf31bb99d71","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-346828"],"size":"990"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:
v1.31.1"],"size":"23948670"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b74
6dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-346828 image ls --format json --alsologtostderr:
I0924 00:38:20.048845  338427 out.go:345] Setting OutFile to fd 1 ...
I0924 00:38:20.049044  338427 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 00:38:20.049071  338427 out.go:358] Setting ErrFile to fd 2...
I0924 00:38:20.049093  338427 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 00:38:20.049361  338427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-296322/.minikube/bin
I0924 00:38:20.050110  338427 config.go:182] Loaded profile config "functional-346828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0924 00:38:20.050290  338427 config.go:182] Loaded profile config "functional-346828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0924 00:38:20.050841  338427 cli_runner.go:164] Run: docker container inspect functional-346828 --format={{.State.Status}}
I0924 00:38:20.071746  338427 ssh_runner.go:195] Run: systemctl --version
I0924 00:38:20.071878  338427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-346828
I0924 00:38:20.095452  338427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/functional-346828/id_rsa Username:docker}
I0924 00:38:20.195377  338427 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-346828 image ls --format yaml --alsologtostderr:
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:b7ea7480ffd693b81fd930ff92b75785adaba6cd5ac80b2fa84eecf31bb99d71
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-346828
size: "990"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-346828
size: "2173567"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "67695038"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-346828 image ls --format yaml --alsologtostderr:
I0924 00:38:19.733526  338332 out.go:345] Setting OutFile to fd 1 ...
I0924 00:38:19.733643  338332 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 00:38:19.733648  338332 out.go:358] Setting ErrFile to fd 2...
I0924 00:38:19.733659  338332 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 00:38:19.733998  338332 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-296322/.minikube/bin
I0924 00:38:19.735119  338332 config.go:182] Loaded profile config "functional-346828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0924 00:38:19.735292  338332 config.go:182] Loaded profile config "functional-346828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0924 00:38:19.735905  338332 cli_runner.go:164] Run: docker container inspect functional-346828 --format={{.State.Status}}
I0924 00:38:19.757914  338332 ssh_runner.go:195] Run: systemctl --version
I0924 00:38:19.757973  338332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-346828
I0924 00:38:19.795285  338332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/functional-346828/id_rsa Username:docker}
I0924 00:38:19.891918  338332 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-346828 ssh pgrep buildkitd: exit status 1 (341.922895ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 image build -t localhost/my-image:functional-346828 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-346828 image build -t localhost/my-image:functional-346828 testdata/build --alsologtostderr: (3.194105857s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-346828 image build -t localhost/my-image:functional-346828 testdata/build --alsologtostderr:
I0924 00:38:20.146738  338451 out.go:345] Setting OutFile to fd 1 ...
I0924 00:38:20.147219  338451 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 00:38:20.147232  338451 out.go:358] Setting ErrFile to fd 2...
I0924 00:38:20.147239  338451 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0924 00:38:20.147489  338451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-296322/.minikube/bin
I0924 00:38:20.148168  338451 config.go:182] Loaded profile config "functional-346828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0924 00:38:20.148778  338451 config.go:182] Loaded profile config "functional-346828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0924 00:38:20.149368  338451 cli_runner.go:164] Run: docker container inspect functional-346828 --format={{.State.Status}}
I0924 00:38:20.167571  338451 ssh_runner.go:195] Run: systemctl --version
I0924 00:38:20.167634  338451 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-346828
I0924 00:38:20.188509  338451 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/functional-346828/id_rsa Username:docker}
I0924 00:38:20.280342  338451 build_images.go:161] Building image from path: /tmp/build.1933320849.tar
I0924 00:38:20.280419  338451 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0924 00:38:20.295194  338451 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1933320849.tar
I0924 00:38:20.305801  338451 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1933320849.tar: stat -c "%s %y" /var/lib/minikube/build/build.1933320849.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1933320849.tar': No such file or directory
I0924 00:38:20.305834  338451 ssh_runner.go:362] scp /tmp/build.1933320849.tar --> /var/lib/minikube/build/build.1933320849.tar (3072 bytes)
I0924 00:38:20.340759  338451 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1933320849
I0924 00:38:20.350646  338451 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1933320849 -xf /var/lib/minikube/build/build.1933320849.tar
I0924 00:38:20.361467  338451 containerd.go:394] Building image: /var/lib/minikube/build/build.1933320849
I0924 00:38:20.361556  338451 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1933320849 --local dockerfile=/var/lib/minikube/build/build.1933320849 --output type=image,name=localhost/my-image:functional-346828
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s
#6 [2/3] RUN true
#6 DONE 0.6s
#7 [3/3] ADD content.txt /
#7 DONE 0.1s
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:d96a36865ee2c8b884adb76b5dda8f80f01f1d033acf1d439353b0b329f5a0f5 0.0s done
#8 exporting config sha256:63f07b04be141a1e8d2d2bc82557c6163e3bfb95addbde8d433c4db8e2deed3f 0.0s done
#8 naming to localhost/my-image:functional-346828 done
#8 DONE 0.1s
I0924 00:38:23.247195  338451 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1933320849 --local dockerfile=/var/lib/minikube/build/build.1933320849 --output type=image,name=localhost/my-image:functional-346828: (2.885606265s)
I0924 00:38:23.247267  338451 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1933320849
I0924 00:38:23.257499  338451 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1933320849.tar
I0924 00:38:23.267578  338451 build_images.go:217] Built localhost/my-image:functional-346828 from /tmp/build.1933320849.tar
I0924 00:38:23.267610  338451 build_images.go:133] succeeded building to: functional-346828
I0924 00:38:23.267616  338451 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.76s)
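The build flow above, restated as a standalone sketch: pgrep buildkitd exiting 1 means no separate buildkitd process was found via SSH, and the log then shows minikube driving buildctl inside the node to perform the build.

out/minikube-linux-arm64 -p functional-346828 ssh pgrep buildkitd                                                  # exit 1 in this run
out/minikube-linux-arm64 -p functional-346828 image build -t localhost/my-image:functional-346828 testdata/build
out/minikube-linux-arm64 -p functional-346828 image ls                                                             # localhost/my-image:functional-346828 should now be listed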

TestFunctional/parallel/ImageCommands/Setup (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-346828
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.74s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 image load --daemon kicbase/echo-server:functional-346828 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-346828 image load --daemon kicbase/echo-server:functional-346828 --alsologtostderr: (1.13042027s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 image load --daemon kicbase/echo-server:functional-346828 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.09s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-346828
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 image load --daemon kicbase/echo-server:functional-346828 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 image save kicbase/echo-server:functional-346828 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
2024/09/24 00:38:16 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 image rm kicbase/echo-server:functional-346828 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.78s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-346828
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-346828 image save --daemon kicbase/echo-server:functional-346828 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-346828
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-346828
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-346828
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-346828
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (133.29s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-166718 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0924 00:38:26.661903  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:39:07.625159  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:40:29.547417  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-166718 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m12.471923851s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (133.29s)
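
The --ha flag starts the profile with multiple control-plane nodes, and the follow-up status step verifies each of them. A minimal sketch of that verification under the same assumptions (profile name from the log; `minikube status` prints one "type: Control Plane" stanza per control-plane node, as seen in the status dumps later in this report):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "ha-166718", "status").CombinedOutput()
	if err != nil {
		log.Fatalf("status failed: %v\n%s", err, out)
	}
	// Count control-plane stanzas in the plain-text status report.
	controlPlanes := strings.Count(string(out), "type: Control Plane")
	if controlPlanes < 3 {
		log.Fatalf("expected an HA cluster with at least 3 control planes, saw %d", controlPlanes)
	}
	fmt.Printf("HA cluster healthy with %d control-plane nodes\n", controlPlanes)
}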

TestMultiControlPlane/serial/DeployApp (35.53s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-166718 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-166718 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-166718 -- rollout status deployment/busybox: (32.652202411s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-166718 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-166718 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-166718 -- exec busybox-7dff88458-74mwv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-166718 -- exec busybox-7dff88458-8n7vs -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-166718 -- exec busybox-7dff88458-kw78f -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-166718 -- exec busybox-7dff88458-74mwv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-166718 -- exec busybox-7dff88458-8n7vs -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-166718 -- exec busybox-7dff88458-kw78f -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-166718 -- exec busybox-7dff88458-74mwv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-166718 -- exec busybox-7dff88458-8n7vs -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-166718 -- exec busybox-7dff88458-kw78f -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (35.53s)

TestMultiControlPlane/serial/PingHostFromPods (1.85s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-166718 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-166718 -- exec busybox-7dff88458-74mwv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-166718 -- exec busybox-7dff88458-74mwv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-166718 -- exec busybox-7dff88458-8n7vs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-166718 -- exec busybox-7dff88458-8n7vs -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-166718 -- exec busybox-7dff88458-kw78f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-166718 -- exec busybox-7dff88458-kw78f -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.85s)
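
The shell pipeline above extracts the host IP from busybox's nslookup output: with that output format (assumed here), line 5 reads "Address 1: 192.168.49.1 host.minikube.internal", so awk 'NR==5' selects it and cut -d' ' -f3 keeps the IP, which the follow-up ping then targets. The same parse in Go, noting that strings.Fields collapses runs of spaces and is therefore slightly more forgiving than cut:

package main

import (
	"fmt"
	"strings"
)

// hostIP mimics `nslookup ... | awk 'NR==5' | cut -d' ' -f3`.
func hostIP(nslookupOut string) (string, bool) {
	lines := strings.Split(nslookupOut, "\n")
	if len(lines) < 5 {
		return "", false
	}
	fields := strings.Fields(lines[4]) // NR==5: "Address 1: <ip> <name>"
	if len(fields) < 3 {
		return "", false
	}
	return fields[2], true // -f3: the resolved IP
}

func main() {
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.49.1 host.minikube.internal\n"
	fmt.Println(hostIP(sample)) // 192.168.49.1 true
}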

TestMultiControlPlane/serial/AddWorkerNode (23.11s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-166718 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-166718 -v=7 --alsologtostderr: (22.081791653s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-166718 status -v=7 --alsologtostderr: (1.028844116s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.11s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-166718 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.003286417s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.00s)

TestMultiControlPlane/serial/CopyFile (18.92s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-166718 status --output json -v=7 --alsologtostderr: (1.057542382s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 cp testdata/cp-test.txt ha-166718:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 cp ha-166718:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2789929691/001/cp-test_ha-166718.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 cp ha-166718:/home/docker/cp-test.txt ha-166718-m02:/home/docker/cp-test_ha-166718_ha-166718-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m02 "sudo cat /home/docker/cp-test_ha-166718_ha-166718-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 cp ha-166718:/home/docker/cp-test.txt ha-166718-m03:/home/docker/cp-test_ha-166718_ha-166718-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m03 "sudo cat /home/docker/cp-test_ha-166718_ha-166718-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 cp ha-166718:/home/docker/cp-test.txt ha-166718-m04:/home/docker/cp-test_ha-166718_ha-166718-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m04 "sudo cat /home/docker/cp-test_ha-166718_ha-166718-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 cp testdata/cp-test.txt ha-166718-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 cp ha-166718-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2789929691/001/cp-test_ha-166718-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 cp ha-166718-m02:/home/docker/cp-test.txt ha-166718:/home/docker/cp-test_ha-166718-m02_ha-166718.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718 "sudo cat /home/docker/cp-test_ha-166718-m02_ha-166718.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 cp ha-166718-m02:/home/docker/cp-test.txt ha-166718-m03:/home/docker/cp-test_ha-166718-m02_ha-166718-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m03 "sudo cat /home/docker/cp-test_ha-166718-m02_ha-166718-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 cp ha-166718-m02:/home/docker/cp-test.txt ha-166718-m04:/home/docker/cp-test_ha-166718-m02_ha-166718-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m04 "sudo cat /home/docker/cp-test_ha-166718-m02_ha-166718-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 cp testdata/cp-test.txt ha-166718-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 cp ha-166718-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2789929691/001/cp-test_ha-166718-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 cp ha-166718-m03:/home/docker/cp-test.txt ha-166718:/home/docker/cp-test_ha-166718-m03_ha-166718.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718 "sudo cat /home/docker/cp-test_ha-166718-m03_ha-166718.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 cp ha-166718-m03:/home/docker/cp-test.txt ha-166718-m02:/home/docker/cp-test_ha-166718-m03_ha-166718-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m02 "sudo cat /home/docker/cp-test_ha-166718-m03_ha-166718-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 cp ha-166718-m03:/home/docker/cp-test.txt ha-166718-m04:/home/docker/cp-test_ha-166718-m03_ha-166718-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m04 "sudo cat /home/docker/cp-test_ha-166718-m03_ha-166718-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 cp testdata/cp-test.txt ha-166718-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 cp ha-166718-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2789929691/001/cp-test_ha-166718-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 cp ha-166718-m04:/home/docker/cp-test.txt ha-166718:/home/docker/cp-test_ha-166718-m04_ha-166718.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718 "sudo cat /home/docker/cp-test_ha-166718-m04_ha-166718.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 cp ha-166718-m04:/home/docker/cp-test.txt ha-166718-m02:/home/docker/cp-test_ha-166718-m04_ha-166718-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m02 "sudo cat /home/docker/cp-test_ha-166718-m04_ha-166718-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 cp ha-166718-m04:/home/docker/cp-test.txt ha-166718-m03:/home/docker/cp-test_ha-166718-m04_ha-166718-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 ssh -n ha-166718-m03 "sudo cat /home/docker/cp-test_ha-166718-m04_ha-166718-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.92s)
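
The steps above form a full copy matrix: the fixture is pushed to each node, then copied from every node to every other node, and each file is read back over ssh to verify. A compact sketch of that loop, with node names from the log and run the same hypothetical wrapper as in the earlier sketch:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
	return string(out)
}

func main() {
	mk := "out/minikube-linux-arm64"
	nodes := []string{"ha-166718", "ha-166718-m02", "ha-166718-m03", "ha-166718-m04"}
	for _, src := range nodes {
		// Seed the fixture on the source node, then verify it landed.
		run(mk, "-p", "ha-166718", "cp", "testdata/cp-test.txt", src+":/home/docker/cp-test.txt")
		run(mk, "-p", "ha-166718", "ssh", "-n", src, "sudo cat /home/docker/cp-test.txt")
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			// Copy node-to-node and read the copy back on the destination.
			dest := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
			run(mk, "-p", "ha-166718", "cp", src+":/home/docker/cp-test.txt", dest)
			run(mk, "-p", "ha-166718", "ssh", "-n", dst, fmt.Sprintf("sudo cat /home/docker/cp-test_%s_%s.txt", src, dst))
		}
	}
}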

TestMultiControlPlane/serial/StopSecondaryNode (13.09s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-166718 node stop m02 -v=7 --alsologtostderr: (12.340831637s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-166718 status -v=7 --alsologtostderr: exit status 7 (744.730065ms)
-- stdout --
	ha-166718
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-166718-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-166718-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-166718-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0924 00:42:12.467857  354826 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:42:12.467983  354826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:42:12.467993  354826 out.go:358] Setting ErrFile to fd 2...
	I0924 00:42:12.467999  354826 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:42:12.468253  354826 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-296322/.minikube/bin
	I0924 00:42:12.468451  354826 out.go:352] Setting JSON to false
	I0924 00:42:12.468491  354826 mustload.go:65] Loading cluster: ha-166718
	I0924 00:42:12.468656  354826 notify.go:220] Checking for updates...
	I0924 00:42:12.468922  354826 config.go:182] Loaded profile config "ha-166718": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 00:42:12.468950  354826 status.go:174] checking status of ha-166718 ...
	I0924 00:42:12.469505  354826 cli_runner.go:164] Run: docker container inspect ha-166718 --format={{.State.Status}}
	I0924 00:42:12.489497  354826 status.go:364] ha-166718 host status = "Running" (err=<nil>)
	I0924 00:42:12.489523  354826 host.go:66] Checking if "ha-166718" exists ...
	I0924 00:42:12.489840  354826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-166718
	I0924 00:42:12.520850  354826 host.go:66] Checking if "ha-166718" exists ...
	I0924 00:42:12.521201  354826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 00:42:12.521244  354826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-166718
	I0924 00:42:12.542268  354826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33160 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/ha-166718/id_rsa Username:docker}
	I0924 00:42:12.641450  354826 ssh_runner.go:195] Run: systemctl --version
	I0924 00:42:12.646351  354826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:42:12.662238  354826 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 00:42:12.729773  354826 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-24 00:42:12.719930067 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 00:42:12.730427  354826 kubeconfig.go:125] found "ha-166718" server: "https://192.168.49.254:8443"
	I0924 00:42:12.730467  354826 api_server.go:166] Checking apiserver status ...
	I0924 00:42:12.730518  354826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 00:42:12.742138  354826 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1493/cgroup
	I0924 00:42:12.755950  354826 api_server.go:182] apiserver freezer: "9:freezer:/docker/f54fd5abefa3409e95906b7102b762ff54266e636823236785ca7149d00ebbdd/kubepods/burstable/pode8a152712262b094909353b664481188/bcac9240b8f10e1587f169bfea5ce532a351072c4fd9eac91dd2f3e47a168c28"
	I0924 00:42:12.756036  354826 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f54fd5abefa3409e95906b7102b762ff54266e636823236785ca7149d00ebbdd/kubepods/burstable/pode8a152712262b094909353b664481188/bcac9240b8f10e1587f169bfea5ce532a351072c4fd9eac91dd2f3e47a168c28/freezer.state
	I0924 00:42:12.766256  354826 api_server.go:204] freezer state: "THAWED"
	I0924 00:42:12.766282  354826 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0924 00:42:12.775555  354826 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0924 00:42:12.775586  354826 status.go:456] ha-166718 apiserver status = Running (err=<nil>)
	I0924 00:42:12.775596  354826 status.go:176] ha-166718 status: &{Name:ha-166718 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 00:42:12.775612  354826 status.go:174] checking status of ha-166718-m02 ...
	I0924 00:42:12.775922  354826 cli_runner.go:164] Run: docker container inspect ha-166718-m02 --format={{.State.Status}}
	I0924 00:42:12.791717  354826 status.go:364] ha-166718-m02 host status = "Stopped" (err=<nil>)
	I0924 00:42:12.791757  354826 status.go:377] host is not running, skipping remaining checks
	I0924 00:42:12.791766  354826 status.go:176] ha-166718-m02 status: &{Name:ha-166718-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 00:42:12.791785  354826 status.go:174] checking status of ha-166718-m03 ...
	I0924 00:42:12.792171  354826 cli_runner.go:164] Run: docker container inspect ha-166718-m03 --format={{.State.Status}}
	I0924 00:42:12.810993  354826 status.go:364] ha-166718-m03 host status = "Running" (err=<nil>)
	I0924 00:42:12.811021  354826 host.go:66] Checking if "ha-166718-m03" exists ...
	I0924 00:42:12.811352  354826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-166718-m03
	I0924 00:42:12.828635  354826 host.go:66] Checking if "ha-166718-m03" exists ...
	I0924 00:42:12.828945  354826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 00:42:12.828989  354826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-166718-m03
	I0924 00:42:12.846629  354826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/ha-166718-m03/id_rsa Username:docker}
	I0924 00:42:12.949878  354826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:42:12.964715  354826 kubeconfig.go:125] found "ha-166718" server: "https://192.168.49.254:8443"
	I0924 00:42:12.964744  354826 api_server.go:166] Checking apiserver status ...
	I0924 00:42:12.964792  354826 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 00:42:12.975989  354826 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1321/cgroup
	I0924 00:42:12.985383  354826 api_server.go:182] apiserver freezer: "9:freezer:/docker/4d5ca332e9c98523e5f4d5fa5d6fc98a558c20c3a646dd45a4f11b3f74adec8b/kubepods/burstable/podbefa75f0e964eb792e3ef0717ced9764/e7727db9ef5dca61b44eb44543d3e67849a9a39b5eb94ddcc21f7bfc74cedbff"
	I0924 00:42:12.985459  354826 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4d5ca332e9c98523e5f4d5fa5d6fc98a558c20c3a646dd45a4f11b3f74adec8b/kubepods/burstable/podbefa75f0e964eb792e3ef0717ced9764/e7727db9ef5dca61b44eb44543d3e67849a9a39b5eb94ddcc21f7bfc74cedbff/freezer.state
	I0924 00:42:12.994569  354826 api_server.go:204] freezer state: "THAWED"
	I0924 00:42:12.994600  354826 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0924 00:42:13.003784  354826 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0924 00:42:13.003824  354826 status.go:456] ha-166718-m03 apiserver status = Running (err=<nil>)
	I0924 00:42:13.003835  354826 status.go:176] ha-166718-m03 status: &{Name:ha-166718-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 00:42:13.003854  354826 status.go:174] checking status of ha-166718-m04 ...
	I0924 00:42:13.004186  354826 cli_runner.go:164] Run: docker container inspect ha-166718-m04 --format={{.State.Status}}
	I0924 00:42:13.021322  354826 status.go:364] ha-166718-m04 host status = "Running" (err=<nil>)
	I0924 00:42:13.021352  354826 host.go:66] Checking if "ha-166718-m04" exists ...
	I0924 00:42:13.021680  354826 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-166718-m04
	I0924 00:42:13.038338  354826 host.go:66] Checking if "ha-166718-m04" exists ...
	I0924 00:42:13.038666  354826 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 00:42:13.038711  354826 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-166718-m04
	I0924 00:42:13.056118  354826 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33175 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/ha-166718-m04/id_rsa Username:docker}
	I0924 00:42:13.147920  354826 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:42:13.160746  354826 status.go:176] ha-166718-m04 status: &{Name:ha-166718-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.09s)
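
Note that the "Non-zero exit ... exit status 7" above is expected: with m02 stopped, `minikube status` signals the degraded state through its exit code (7 in this run) while still printing the per-node report, so the harness treats the exit code as data rather than as a failure. A sketch of how a caller can recover both the report and the code by unwrapping *exec.ExitError (binary path and profile from the log):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-166718", "status")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("all nodes running:\n%s", out)
	case errors.As(err, &exitErr):
		// Degraded but reportable state, e.g. exit status 7 with one host stopped.
		fmt.Printf("status exited %d:\n%s", exitErr.ExitCode(), out)
	default:
		log.Fatalf("could not run minikube at all: %v", err)
	}
}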

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

TestMultiControlPlane/serial/RestartSecondaryNode (29.11s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 node start m02 -v=7 --alsologtostderr
E0924 00:42:33.614083  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:42:33.620546  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:42:33.631984  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:42:33.653321  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:42:33.694643  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:42:33.776051  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:42:33.937837  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:42:34.259695  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:42:34.901562  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:42:36.182997  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:42:38.744351  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-166718 node start m02 -v=7 --alsologtostderr: (27.897189488s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-166718 status -v=7 --alsologtostderr: (1.069175926s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (29.11s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0924 00:42:43.865708  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.064585169s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.07s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (123.34s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-166718 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-166718 -v=7 --alsologtostderr
E0924 00:42:45.682523  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:42:54.107146  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:43:13.388828  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:43:14.588551  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-166718 -v=7 --alsologtostderr: (37.18385136s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-166718 --wait=true -v=7 --alsologtostderr
E0924 00:43:55.550123  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-166718 --wait=true -v=7 --alsologtostderr: (1m25.978918881s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-166718
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (123.34s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.38s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-166718 node delete m03 -v=7 --alsologtostderr: (9.498960448s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.38s)
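
The go-template in the last step walks every node's status.conditions and prints one line per node with the status of its Ready condition. The equivalent check in Go against `kubectl get nodes -o json`, with the node types trimmed down to just the fields used:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var nl nodeList
	if err := json.Unmarshal(out, &nl); err != nil {
		log.Fatal(err)
	}
	for _, n := range nl.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" { // same predicate as {{if eq .type "Ready"}}
				fmt.Println(n.Metadata.Name, c.Status)
			}
		}
	}
}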

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.74s)

TestMultiControlPlane/serial/StopCluster (36.65s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 stop -v=7 --alsologtostderr
E0924 00:45:17.471524  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-166718 stop -v=7 --alsologtostderr: (36.524955905s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-166718 status -v=7 --alsologtostderr: exit status 7 (119.780362ms)
-- stdout --
	ha-166718
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-166718-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-166718-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0924 00:45:35.132696  369145 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:45:35.132938  369145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:45:35.132986  369145 out.go:358] Setting ErrFile to fd 2...
	I0924 00:45:35.133012  369145 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:45:35.133678  369145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-296322/.minikube/bin
	I0924 00:45:35.133909  369145 out.go:352] Setting JSON to false
	I0924 00:45:35.133953  369145 mustload.go:65] Loading cluster: ha-166718
	I0924 00:45:35.134036  369145 notify.go:220] Checking for updates...
	I0924 00:45:35.135143  369145 config.go:182] Loaded profile config "ha-166718": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 00:45:35.135174  369145 status.go:174] checking status of ha-166718 ...
	I0924 00:45:35.135800  369145 cli_runner.go:164] Run: docker container inspect ha-166718 --format={{.State.Status}}
	I0924 00:45:35.153923  369145 status.go:364] ha-166718 host status = "Stopped" (err=<nil>)
	I0924 00:45:35.153947  369145 status.go:377] host is not running, skipping remaining checks
	I0924 00:45:35.153955  369145 status.go:176] ha-166718 status: &{Name:ha-166718 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 00:45:35.153990  369145 status.go:174] checking status of ha-166718-m02 ...
	I0924 00:45:35.154337  369145 cli_runner.go:164] Run: docker container inspect ha-166718-m02 --format={{.State.Status}}
	I0924 00:45:35.182675  369145 status.go:364] ha-166718-m02 host status = "Stopped" (err=<nil>)
	I0924 00:45:35.182702  369145 status.go:377] host is not running, skipping remaining checks
	I0924 00:45:35.182709  369145 status.go:176] ha-166718-m02 status: &{Name:ha-166718-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 00:45:35.182734  369145 status.go:174] checking status of ha-166718-m04 ...
	I0924 00:45:35.183100  369145 cli_runner.go:164] Run: docker container inspect ha-166718-m04 --format={{.State.Status}}
	I0924 00:45:35.200826  369145 status.go:364] ha-166718-m04 host status = "Stopped" (err=<nil>)
	I0924 00:45:35.200853  369145 status.go:377] host is not running, skipping remaining checks
	I0924 00:45:35.200861  369145 status.go:176] ha-166718-m04 status: &{Name:ha-166718-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.65s)

TestMultiControlPlane/serial/RestartCluster (76.18s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-166718 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-166718 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m15.223526259s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (76.18s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

TestMultiControlPlane/serial/AddSecondaryNode (45.87s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-166718 --control-plane -v=7 --alsologtostderr
E0924 00:47:33.613384  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-166718 --control-plane -v=7 --alsologtostderr: (44.866061503s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-166718 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-166718 status -v=7 --alsologtostderr: (1.00378545s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (45.87s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.033280367s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

TestJSONOutput/start/Command (51.39s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-348858 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0924 00:48:01.312796  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-348858 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (51.387306517s)
--- PASS: TestJSONOutput/start/Command (51.39s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-348858 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-348858 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.88s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-348858 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-348858 --output=json --user=testUser: (5.880361277s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-543964 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-543964 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (84.18807ms)

-- stdout --
	{"specversion":"1.0","id":"efea9712-7aff-4350-885f-4223cb86e8a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-543964] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0687afbd-b5f9-49fe-a801-ce7088d46b04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19696"}}
	{"specversion":"1.0","id":"a172f913-460b-4406-8e27-1f9f4193b277","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8e7785e7-a874-4c6b-a00a-21fefa2e132d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19696-296322/kubeconfig"}}
	{"specversion":"1.0","id":"b26b027a-60f7-476c-8c29-acb4e4231a9d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-296322/.minikube"}}
	{"specversion":"1.0","id":"95d99a69-e542-47c5-907c-776347527b1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"fdc3e90e-4e20-47ca-aeff-56f05d6dd554","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"97265ead-98f5-447a-a7ef-0abc564a1aaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-543964" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-543964
--- PASS: TestErrorJSONOutput (0.23s)
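
Each line of the stdout block above is a CloudEvents-style JSON object ("specversion", "type", "data"). As a minimal sketch, not part of the test suite, the stream could be decoded like this in Go; the struct mirrors only the fields visible in this log:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the fields visible in the log above; minikube's real
	// schema may carry more.
	type event struct {
		SpecVersion string            `json:"specversion"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("exit %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}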

TestKicCustomNetwork/create_custom_network (40.71s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-792989 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-792989 --network=: (38.552692189s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-792989" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-792989
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-792989: (2.130300693s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.71s)

TestKicCustomNetwork/use_default_bridge_network (32.51s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-549302 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-549302 --network=bridge: (30.385692448s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-549302" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-549302
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-549302: (2.092967624s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.51s)

TestKicExistingNetwork (35.92s)

=== RUN   TestKicExistingNetwork
I0924 00:50:03.457075  301711 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0924 00:50:03.473547  301711 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0924 00:50:03.473636  301711 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0924 00:50:03.473659  301711 cli_runner.go:164] Run: docker network inspect existing-network
W0924 00:50:03.489516  301711 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0924 00:50:03.489546  301711 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0924 00:50:03.489559  301711 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0924 00:50:03.489657  301711 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0924 00:50:03.507620  301711 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-bf68b1fb5cb5 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:10:ef:ab:db} reservation:<nil>}
I0924 00:50:03.507991  301711 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4002125a30}
I0924 00:50:03.508015  301711 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0924 00:50:03.508069  301711 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0924 00:50:03.580634  301711 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-657149 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-657149 --network=existing-network: (33.788470971s)
helpers_test.go:175: Cleaning up "existing-network-657149" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-657149
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-657149: (1.973483961s)
I0924 00:50:39.358781  301711 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.92s)
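
The trace above shows the free-subnet search: 192.168.49.0/24 is skipped as taken and 192.168.58.0/24 is chosen, which suggests candidates advance the third octet in steps of 9 (.49, .58, .67). A rough sketch of such a search under that assumption; in the real flow the taken set would be populated from "docker network inspect":

	package main

	import "fmt"

	// freeSubnet returns the first 192.168.x.0/24 candidate that is not
	// already taken, stepping the third octet by 9 as the log suggests.
	func freeSubnet(taken map[string]bool) (string, bool) {
		for octet := 49; octet <= 247; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[cidr] {
				return cidr, true
			}
		}
		return "", false
	}

	func main() {
		taken := map[string]bool{"192.168.49.0/24": true} // the existing kic network above
		if cidr, ok := freeSubnet(taken); ok {
			fmt.Println("using free private subnet", cidr) // prints 192.168.58.0/24
		}
	}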

TestKicCustomSubnet (33.61s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-880081 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-880081 --subnet=192.168.60.0/24: (31.474126624s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-880081 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-880081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-880081
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-880081: (2.114617975s)
--- PASS: TestKicCustomSubnet (33.61s)

TestKicStaticIP (34.09s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-059094 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-059094 --static-ip=192.168.200.200: (31.693918573s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-059094 ip
helpers_test.go:175: Cleaning up "static-ip-059094" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-059094
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-059094: (2.240563881s)
--- PASS: TestKicStaticIP (34.09s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (68.52s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-848509 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-848509 --driver=docker  --container-runtime=containerd: (30.368780348s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-851684 --driver=docker  --container-runtime=containerd
E0924 00:52:33.613716  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
E0924 00:52:45.682856  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-851684 --driver=docker  --container-runtime=containerd: (32.643115629s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-848509
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-851684
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-851684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-851684
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-851684: (1.969478777s)
helpers_test.go:175: Cleaning up "first-848509" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-848509
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-848509: (2.229311824s)
--- PASS: TestMinikubeProfile (68.52s)

TestMountStart/serial/StartWithMountFirst (6.69s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-541809 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-541809 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.694613327s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.69s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-541809 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (6.09s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-543672 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-543672 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.088506362s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.09s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-543672 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-541809 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-541809 --alsologtostderr -v=5: (1.617143307s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-543672 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-543672
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-543672: (1.213648638s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (7.8s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-543672
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-543672: (6.800501412s)
--- PASS: TestMountStart/serial/RestartStopped (7.80s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-543672 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (66.65s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-203274 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0924 00:54:08.750537  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-203274 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m6.14676137s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (66.65s)

TestMultiNode/serial/DeployApp2Nodes (20.99s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203274 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203274 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-203274 -- rollout status deployment/busybox: (19.011470806s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203274 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203274 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203274 -- exec busybox-7dff88458-x2pjj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203274 -- exec busybox-7dff88458-x2zv7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203274 -- exec busybox-7dff88458-x2pjj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203274 -- exec busybox-7dff88458-x2zv7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203274 -- exec busybox-7dff88458-x2pjj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203274 -- exec busybox-7dff88458-x2zv7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (20.99s)
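
The jsonpath query above, {.items[*].status.podIP}, flattens the pod list into a space-separated list of pod IPs. A self-contained sketch of the same extraction over a hand-built pod list; the IP values are hypothetical:

	package main

	import (
		"fmt"
		"strings"
	)

	type podStatus struct {
		PodIP string `json:"podIP"`
	}
	type pod struct {
		Status podStatus `json:"status"`
	}
	type podList struct {
		Items []pod `json:"items"`
	}

	// podIPs mirrors what `-o jsonpath='{.items[*].status.podIP}'` prints.
	func podIPs(pl podList) string {
		ips := make([]string, 0, len(pl.Items))
		for _, p := range pl.Items {
			ips = append(ips, p.Status.PodIP)
		}
		return strings.Join(ips, " ")
	}

	func main() {
		pl := podList{Items: []pod{
			{Status: podStatus{PodIP: "10.244.0.3"}}, // hypothetical pod IPs
			{Status: podStatus{PodIP: "10.244.1.2"}},
		}}
		fmt.Println(podIPs(pl))
	}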

TestMultiNode/serial/PingHostFrom2Pods (1.02s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203274 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203274 -- exec busybox-7dff88458-x2pjj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203274 -- exec busybox-7dff88458-x2pjj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203274 -- exec busybox-7dff88458-x2zv7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-203274 -- exec busybox-7dff88458-x2zv7 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.02s)
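
The pipeline above extracts the host gateway IP from BusyBox nslookup output: awk 'NR==5' keeps the fifth line and cut -d' ' -f3 takes its third space-separated field, and the follow-up ping targets that address. A sketch of the same extraction; the sample output shape is an assumption modeled on BusyBox nslookup:

	package main

	import (
		"fmt"
		"strings"
	)

	// hostIP mimics: nslookup ... | awk 'NR==5' | cut -d' ' -f3
	func hostIP(out string) string {
		lines := strings.Split(out, "\n")
		if len(lines) < 5 {
			return ""
		}
		fields := strings.Split(lines[4], " ") // line 5, split on single spaces
		if len(fields) < 3 {
			return ""
		}
		return fields[2]
	}

	func main() {
		sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.67.1 host.minikube.internal\n"
		fmt.Println(hostIP(sample)) // 192.168.67.1, matching the ping target above
	}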

TestMultiNode/serial/AddNode (18.1s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-203274 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-203274 -v 3 --alsologtostderr: (17.4221862s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (18.10s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-203274 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.69s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

TestMultiNode/serial/CopyFile (9.85s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 cp testdata/cp-test.txt multinode-203274:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 ssh -n multinode-203274 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 cp multinode-203274:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2260189461/001/cp-test_multinode-203274.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 ssh -n multinode-203274 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 cp multinode-203274:/home/docker/cp-test.txt multinode-203274-m02:/home/docker/cp-test_multinode-203274_multinode-203274-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 ssh -n multinode-203274 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 ssh -n multinode-203274-m02 "sudo cat /home/docker/cp-test_multinode-203274_multinode-203274-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 cp multinode-203274:/home/docker/cp-test.txt multinode-203274-m03:/home/docker/cp-test_multinode-203274_multinode-203274-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 ssh -n multinode-203274 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 ssh -n multinode-203274-m03 "sudo cat /home/docker/cp-test_multinode-203274_multinode-203274-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 cp testdata/cp-test.txt multinode-203274-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 ssh -n multinode-203274-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 cp multinode-203274-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2260189461/001/cp-test_multinode-203274-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 ssh -n multinode-203274-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 cp multinode-203274-m02:/home/docker/cp-test.txt multinode-203274:/home/docker/cp-test_multinode-203274-m02_multinode-203274.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 ssh -n multinode-203274-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 ssh -n multinode-203274 "sudo cat /home/docker/cp-test_multinode-203274-m02_multinode-203274.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 cp multinode-203274-m02:/home/docker/cp-test.txt multinode-203274-m03:/home/docker/cp-test_multinode-203274-m02_multinode-203274-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 ssh -n multinode-203274-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 ssh -n multinode-203274-m03 "sudo cat /home/docker/cp-test_multinode-203274-m02_multinode-203274-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 cp testdata/cp-test.txt multinode-203274-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 ssh -n multinode-203274-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 cp multinode-203274-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2260189461/001/cp-test_multinode-203274-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 ssh -n multinode-203274-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 cp multinode-203274-m03:/home/docker/cp-test.txt multinode-203274:/home/docker/cp-test_multinode-203274-m03_multinode-203274.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 ssh -n multinode-203274-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 ssh -n multinode-203274 "sudo cat /home/docker/cp-test_multinode-203274-m03_multinode-203274.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 cp multinode-203274-m03:/home/docker/cp-test.txt multinode-203274-m02:/home/docker/cp-test_multinode-203274-m03_multinode-203274-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 ssh -n multinode-203274-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 ssh -n multinode-203274-m02 "sudo cat /home/docker/cp-test_multinode-203274-m03_multinode-203274-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.85s)

TestMultiNode/serial/StopNode (2.28s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-203274 node stop m03: (1.233556914s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-203274 status: exit status 7 (515.202522ms)

-- stdout --
	multinode-203274
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-203274-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-203274-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-203274 status --alsologtostderr: exit status 7 (527.217563ms)

-- stdout --
	multinode-203274
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-203274-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-203274-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0924 00:55:21.095520  422594 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:55:21.095763  422594 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:55:21.095791  422594 out.go:358] Setting ErrFile to fd 2...
	I0924 00:55:21.095813  422594 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:55:21.096108  422594 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-296322/.minikube/bin
	I0924 00:55:21.096348  422594 out.go:352] Setting JSON to false
	I0924 00:55:21.096410  422594 mustload.go:65] Loading cluster: multinode-203274
	I0924 00:55:21.096485  422594 notify.go:220] Checking for updates...
	I0924 00:55:21.097600  422594 config.go:182] Loaded profile config "multinode-203274": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 00:55:21.097661  422594 status.go:174] checking status of multinode-203274 ...
	I0924 00:55:21.098387  422594 cli_runner.go:164] Run: docker container inspect multinode-203274 --format={{.State.Status}}
	I0924 00:55:21.118645  422594 status.go:364] multinode-203274 host status = "Running" (err=<nil>)
	I0924 00:55:21.118668  422594 host.go:66] Checking if "multinode-203274" exists ...
	I0924 00:55:21.119123  422594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-203274
	I0924 00:55:21.140790  422594 host.go:66] Checking if "multinode-203274" exists ...
	I0924 00:55:21.141121  422594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 00:55:21.141185  422594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203274
	I0924 00:55:21.165654  422594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33280 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/multinode-203274/id_rsa Username:docker}
	I0924 00:55:21.264205  422594 ssh_runner.go:195] Run: systemctl --version
	I0924 00:55:21.268608  422594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:55:21.280865  422594 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 00:55:21.351075  422594 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-24 00:55:21.339443493 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 00:55:21.351678  422594 kubeconfig.go:125] found "multinode-203274" server: "https://192.168.67.2:8443"
	I0924 00:55:21.351710  422594 api_server.go:166] Checking apiserver status ...
	I0924 00:55:21.351761  422594 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0924 00:55:21.363085  422594 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1365/cgroup
	I0924 00:55:21.372733  422594 api_server.go:182] apiserver freezer: "9:freezer:/docker/a0d40ac15facca03a38f93c2899e2734ad48fad7f86c3da0b25d09661aebb8ff/kubepods/burstable/pode40206ab26c30fca3714de6fe0fff1a5/9c92f2a3ef7e4b0d01d5df1c0b4ad7a70b845c19cec44b6a4217e6bc0b2d07ed"
	I0924 00:55:21.372826  422594 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a0d40ac15facca03a38f93c2899e2734ad48fad7f86c3da0b25d09661aebb8ff/kubepods/burstable/pode40206ab26c30fca3714de6fe0fff1a5/9c92f2a3ef7e4b0d01d5df1c0b4ad7a70b845c19cec44b6a4217e6bc0b2d07ed/freezer.state
	I0924 00:55:21.381769  422594 api_server.go:204] freezer state: "THAWED"
	I0924 00:55:21.381801  422594 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0924 00:55:21.389400  422594 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0924 00:55:21.389429  422594 status.go:456] multinode-203274 apiserver status = Running (err=<nil>)
	I0924 00:55:21.389449  422594 status.go:176] multinode-203274 status: &{Name:multinode-203274 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 00:55:21.389473  422594 status.go:174] checking status of multinode-203274-m02 ...
	I0924 00:55:21.389791  422594 cli_runner.go:164] Run: docker container inspect multinode-203274-m02 --format={{.State.Status}}
	I0924 00:55:21.407731  422594 status.go:364] multinode-203274-m02 host status = "Running" (err=<nil>)
	I0924 00:55:21.407759  422594 host.go:66] Checking if "multinode-203274-m02" exists ...
	I0924 00:55:21.408069  422594 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-203274-m02
	I0924 00:55:21.427612  422594 host.go:66] Checking if "multinode-203274-m02" exists ...
	I0924 00:55:21.427935  422594 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0924 00:55:21.427978  422594 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-203274-m02
	I0924 00:55:21.445821  422594 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33285 SSHKeyPath:/home/jenkins/minikube-integration/19696-296322/.minikube/machines/multinode-203274-m02/id_rsa Username:docker}
	I0924 00:55:21.536948  422594 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0924 00:55:21.550373  422594 status.go:176] multinode-203274-m02 status: &{Name:multinode-203274-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0924 00:55:21.550409  422594 status.go:174] checking status of multinode-203274-m03 ...
	I0924 00:55:21.550733  422594 cli_runner.go:164] Run: docker container inspect multinode-203274-m03 --format={{.State.Status}}
	I0924 00:55:21.568052  422594 status.go:364] multinode-203274-m03 host status = "Stopped" (err=<nil>)
	I0924 00:55:21.568076  422594 status.go:377] host is not running, skipping remaining checks
	I0924 00:55:21.568084  422594 status.go:176] multinode-203274-m03 status: &{Name:multinode-203274-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
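
The stderr trace above shows how the status check validates the control plane: find the kube-apiserver process, confirm its freezer cgroup is THAWED, then probe /healthz over HTTPS. A minimal sketch of that final probe; the endpoint comes from the log, and insecure TLS keeps the sketch self-contained where a real check would verify the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
	)

	func main() {
		// Skipping TLS verification is an assumption of this sketch only;
		// it has no access to the cluster's CA certificate.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status) // "200 OK" in the run above
	}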

TestMultiNode/serial/StartAfterStop (9.6s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-203274 node start m03 -v=7 --alsologtostderr: (8.862888421s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.60s)

TestMultiNode/serial/RestartKeepsNodes (102.69s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-203274
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-203274
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-203274: (24.97515499s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-203274 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-203274 --wait=true -v=8 --alsologtostderr: (1m17.581616649s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-203274
--- PASS: TestMultiNode/serial/RestartKeepsNodes (102.69s)

TestMultiNode/serial/DeleteNode (5.73s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-203274 node delete m03: (5.037543672s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.73s)
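
The go-template in the check above walks each node's status.conditions and prints the status of its Ready condition. A self-contained sketch that evaluates the identical template over a hand-built two-node list:

	package main

	import (
		"os"
		"text/template"
	)

	// readyTmpl is the same template passed to kubectl above.
	const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	func main() {
		nodes := map[string]any{
			"items": []any{
				map[string]any{"status": map[string]any{"conditions": []any{
					map[string]any{"type": "Ready", "status": "True"},
				}}},
				map[string]any{"status": map[string]any{"conditions": []any{
					map[string]any{"type": "Ready", "status": "True"},
				}}},
			},
		}
		t := template.Must(template.New("ready").Parse(readyTmpl))
		if err := t.Execute(os.Stdout, nodes); err != nil { // prints " True" per node
			panic(err)
		}
	}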

TestMultiNode/serial/StopMultiNode (24.02s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 stop
E0924 00:57:33.613768  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-203274 stop: (23.839159873s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-203274 status: exit status 7 (100.588655ms)

-- stdout --
	multinode-203274
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-203274-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-203274 status --alsologtostderr: exit status 7 (84.338679ms)

-- stdout --
	multinode-203274
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-203274-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0924 00:57:43.575925  431024 out.go:345] Setting OutFile to fd 1 ...
	I0924 00:57:43.576085  431024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:57:43.576097  431024 out.go:358] Setting ErrFile to fd 2...
	I0924 00:57:43.576103  431024 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 00:57:43.576349  431024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-296322/.minikube/bin
	I0924 00:57:43.576537  431024 out.go:352] Setting JSON to false
	I0924 00:57:43.576577  431024 mustload.go:65] Loading cluster: multinode-203274
	I0924 00:57:43.576678  431024 notify.go:220] Checking for updates...
	I0924 00:57:43.577028  431024 config.go:182] Loaded profile config "multinode-203274": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 00:57:43.577043  431024 status.go:174] checking status of multinode-203274 ...
	I0924 00:57:43.577560  431024 cli_runner.go:164] Run: docker container inspect multinode-203274 --format={{.State.Status}}
	I0924 00:57:43.596691  431024 status.go:364] multinode-203274 host status = "Stopped" (err=<nil>)
	I0924 00:57:43.596710  431024 status.go:377] host is not running, skipping remaining checks
	I0924 00:57:43.596717  431024 status.go:176] multinode-203274 status: &{Name:multinode-203274 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0924 00:57:43.596747  431024 status.go:174] checking status of multinode-203274-m02 ...
	I0924 00:57:43.597066  431024 cli_runner.go:164] Run: docker container inspect multinode-203274-m02 --format={{.State.Status}}
	I0924 00:57:43.616609  431024 status.go:364] multinode-203274-m02 host status = "Stopped" (err=<nil>)
	I0924 00:57:43.616628  431024 status.go:377] host is not running, skipping remaining checks
	I0924 00:57:43.616635  431024 status.go:176] multinode-203274-m02 status: &{Name:multinode-203274-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.02s)

TestMultiNode/serial/RestartMultiNode (53.98s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-203274 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0924 00:57:45.682485  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-203274 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (53.308901559s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-203274 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.98s)

TestMultiNode/serial/ValidateNameConflict (34.85s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-203274
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-203274-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-203274-m02 --driver=docker  --container-runtime=containerd: exit status 14 (71.188228ms)

-- stdout --
	* [multinode-203274-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-296322/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-296322/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-203274-m02' is duplicated with machine name 'multinode-203274-m02' in profile 'multinode-203274'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-203274-m03 --driver=docker  --container-runtime=containerd
E0924 00:58:56.675003  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-203274-m03 --driver=docker  --container-runtime=containerd: (32.437255176s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-203274
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-203274: exit status 80 (300.914331ms)

-- stdout --
	* Adding node m03 to cluster multinode-203274 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-203274-m03 already exists in multinode-203274-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-203274-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-203274-m03: (1.985635414s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.85s)

TestPreload (114.41s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-867386 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-867386 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m15.410349004s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-867386 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-867386 image pull gcr.io/k8s-minikube/busybox: (2.018208139s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-867386
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-867386: (12.152304602s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-867386 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-867386 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (21.840702265s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-867386 image list
helpers_test.go:175: Cleaning up "test-preload-867386" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-867386
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-867386: (2.720990402s)
--- PASS: TestPreload (114.41s)

TestScheduledStopUnix (107.66s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-926238 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-926238 --memory=2048 --driver=docker  --container-runtime=containerd: (31.13468701s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-926238 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-926238 -n scheduled-stop-926238
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-926238 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0924 01:01:42.412657  301711 retry.go:31] will retry after 86.67µs: open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/scheduled-stop-926238/pid: no such file or directory
I0924 01:01:42.413766  301711 retry.go:31] will retry after 186.486µs: open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/scheduled-stop-926238/pid: no such file or directory
I0924 01:01:42.414922  301711 retry.go:31] will retry after 296.979µs: open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/scheduled-stop-926238/pid: no such file or directory
I0924 01:01:42.416038  301711 retry.go:31] will retry after 393.255µs: open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/scheduled-stop-926238/pid: no such file or directory
I0924 01:01:42.417194  301711 retry.go:31] will retry after 256.672µs: open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/scheduled-stop-926238/pid: no such file or directory
I0924 01:01:42.418343  301711 retry.go:31] will retry after 809.945µs: open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/scheduled-stop-926238/pid: no such file or directory
I0924 01:01:42.419510  301711 retry.go:31] will retry after 1.57396ms: open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/scheduled-stop-926238/pid: no such file or directory
I0924 01:01:42.421796  301711 retry.go:31] will retry after 2.118585ms: open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/scheduled-stop-926238/pid: no such file or directory
I0924 01:01:42.425066  301711 retry.go:31] will retry after 2.955352ms: open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/scheduled-stop-926238/pid: no such file or directory
I0924 01:01:42.428337  301711 retry.go:31] will retry after 4.475091ms: open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/scheduled-stop-926238/pid: no such file or directory
I0924 01:01:42.433600  301711 retry.go:31] will retry after 5.067062ms: open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/scheduled-stop-926238/pid: no such file or directory
I0924 01:01:42.438872  301711 retry.go:31] will retry after 4.413434ms: open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/scheduled-stop-926238/pid: no such file or directory
I0924 01:01:42.444328  301711 retry.go:31] will retry after 15.948449ms: open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/scheduled-stop-926238/pid: no such file or directory
I0924 01:01:42.462286  301711 retry.go:31] will retry after 26.620515ms: open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/scheduled-stop-926238/pid: no such file or directory
I0924 01:01:42.489504  301711 retry.go:31] will retry after 33.86422ms: open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/scheduled-stop-926238/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-926238 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-926238 -n scheduled-stop-926238
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-926238
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-926238 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0924 01:02:33.616787  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:02:45.683411  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-926238
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-926238: exit status 7 (68.401152ms)

                                                
                                                
-- stdout --
	scheduled-stop-926238
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-926238 -n scheduled-stop-926238
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-926238 -n scheduled-stop-926238: exit status 7 (65.464752ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-926238" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-926238
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-926238: (4.975615894s)
--- PASS: TestScheduledStopUnix (107.66s)
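
The scheduled-stop sequence exercised above, as a hand-runnable sketch (all flags appear in the log; note that exit status 7 from status against a stopped host is expected, per the "may be ok" line):

    out/minikube-linux-arm64 start -p scheduled-stop-926238 --memory=2048 --driver=docker --container-runtime=containerd
    # schedule a stop five minutes out, then confirm the countdown is registered
    out/minikube-linux-arm64 stop -p scheduled-stop-926238 --schedule 5m
    out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-926238
    # replace the schedule with a shorter one, then cancel it
    out/minikube-linux-arm64 stop -p scheduled-stop-926238 --schedule 15s
    out/minikube-linux-arm64 stop -p scheduled-stop-926238 --cancel-scheduled
    # re-arm and let it fire; afterwards status exits 7 and reports host: Stopped
    out/minikube-linux-arm64 stop -p scheduled-stop-926238 --schedule 15s
    out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-926238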

                                                
                                    
TestInsufficientStorage (10.27s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-372411 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-372411 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.860447899s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9cdaa758-d3d1-4225-bb6d-7a174d009699","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-372411] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"afae5999-c297-40e3-9df8-d03567c332d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19696"}}
	{"specversion":"1.0","id":"3a09d5b1-e251-4c3f-a0bf-f130f19c1b58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0064e969-59b5-4f79-9cad-b4f0d2ee639c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19696-296322/kubeconfig"}}
	{"specversion":"1.0","id":"f9997773-cd89-4bd5-8e29-738720f0c883","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-296322/.minikube"}}
	{"specversion":"1.0","id":"6660e8d5-ede1-4d4e-9d26-415244ff5f18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"274f9cda-8376-44bb-891f-b91c823be0ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b2fca2aa-3e85-42b3-9e49-9f8e5f69bcbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"78eb0341-a39f-4cd9-9b52-9d4b75e87a4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"e19eeab2-1154-482a-a7cf-a802b3dcee8c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"2f195c61-32a5-4a4f-8c0e-07fe3cd4ddb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"98e4ef4c-155e-4369-9355-6aeed9df5f7a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-372411\" primary control-plane node in \"insufficient-storage-372411\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"24398017-496e-483f-8dea-6d9fcc8a4f59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727108449-19696 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"46ea4511-0fda-46db-bc61-20218279158d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d59aa2e8-6b18-45cb-a106-00efbdf41322","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-372411 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-372411 --output=json --layout=cluster: exit status 7 (269.191277ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-372411","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-372411","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 01:03:06.532979  449521 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-372411" does not appear in /home/jenkins/minikube-integration/19696-296322/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-372411 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-372411 --output=json --layout=cluster: exit status 7 (280.467615ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-372411","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-372411","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0924 01:03:06.813258  449584 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-372411" does not appear in /home/jenkins/minikube-integration/19696-296322/kubeconfig
	E0924 01:03:06.823665  449584 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/insufficient-storage-372411/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-372411" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-372411
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-372411: (1.860853439s)
--- PASS: TestInsufficientStorage (10.27s)
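
The storage gate can be reproduced outside the suite. The two MINIKUBE_TEST_* variables below are what this run used to simulate a nearly-full /var (their values are taken verbatim from the JSON log; the exact units are whatever minikube interprets them as), and exit code 26 maps to RSRC_DOCKER_STORAGE:

    # simulate constrained storage, per this run's environment
    export MINIKUBE_TEST_STORAGE_CAPACITY=100
    export MINIKUBE_TEST_AVAILABLE_STORAGE=19
    # fails with exit status 26 before the cluster finishes coming up
    out/minikube-linux-arm64 start -p insufficient-storage-372411 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=containerd
    # the profile is left in InsufficientStorage (StatusCode 507)
    out/minikube-linux-arm64 status -p insufficient-storage-372411 --output=json --layout=cluster

As the error message itself notes, '--force' can be passed to skip the check.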

                                                
                                    
TestRunningBinaryUpgrade (97.52s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1049592337 start -p running-upgrade-491834 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1049592337 start -p running-upgrade-491834 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (51.517696979s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-491834 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0924 01:07:33.615890  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:07:45.683115  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-491834 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (42.951738414s)
helpers_test.go:175: Cleaning up "running-upgrade-491834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-491834
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-491834: (2.282416997s)
--- PASS: TestRunningBinaryUpgrade (97.52s)
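
TestRunningBinaryUpgrade boils down to: start a cluster with an old release binary, then run the new binary's start against the same, still-running profile. A sketch (the /tmp path is the temporary copy of v1.26.0 this run downloaded; substitute any older minikube binary):

    /tmp/minikube-v1.26.0.1049592337 start -p running-upgrade-491834 --memory=2200 --vm-driver=docker --container-runtime=containerd
    # upgrade in place: the new binary adopts the running profile
    out/minikube-linux-arm64 start -p running-upgrade-491834 --memory=2200 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 delete -p running-upgrade-491834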

                                                
                                    
TestKubernetesUpgrade (107.7s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-234097 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-234097 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.193759656s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-234097
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-234097: (1.230754653s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-234097 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-234097 status --format={{.Host}}: exit status 7 (66.642903ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-234097 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-234097 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (33.400041805s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-234097 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-234097 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-234097 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (116.021287ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-234097] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-296322/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-296322/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-234097
	    minikube start -p kubernetes-upgrade-234097 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2340972 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-234097 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-234097 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-234097 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.380496488s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-234097" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-234097
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-234097: (3.162582015s)
--- PASS: TestKubernetesUpgrade (107.70s)
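
The upgrade/downgrade contract verified above, as commands. The downgrade attempt is expected to fail with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED); minikube's own suggestion, quoted in the stderr block, is to delete and recreate the cluster if the older version is really needed:

    out/minikube-linux-arm64 start -p kubernetes-upgrade-234097 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 stop -p kubernetes-upgrade-234097
    # upgrade the stopped cluster
    out/minikube-linux-arm64 start -p kubernetes-upgrade-234097 --memory=2200 --kubernetes-version=v1.31.1 --driver=docker --container-runtime=containerd
    # downgrading in place is refused (exit 106)
    out/minikube-linux-arm64 start -p kubernetes-upgrade-234097 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
    # restarting at the current version still works
    out/minikube-linux-arm64 start -p kubernetes-upgrade-234097 --memory=2200 --kubernetes-version=v1.31.1 --driver=docker --container-runtime=containerd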

                                                
                                    
TestMissingContainerUpgrade (180.36s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2073185345 start -p missing-upgrade-277160 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2073185345 start -p missing-upgrade-277160 --memory=2200 --driver=docker  --container-runtime=containerd: (1m28.031487633s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-277160
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-277160: (10.298898922s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-277160
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-277160 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-277160 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m18.419718105s)
helpers_test.go:175: Cleaning up "missing-upgrade-277160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-277160
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-277160: (2.622768624s)
--- PASS: TestMissingContainerUpgrade (180.36s)
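
What this test simulates is the node container disappearing underneath an old profile: the container is stopped and removed with plain docker, and the new binary's start must detect that and recreate it. Sketch from this run's commands:

    /tmp/minikube-v1.26.0.2073185345 start -p missing-upgrade-277160 --memory=2200 --driver=docker --container-runtime=containerd
    # delete the node container out from under minikube
    docker stop missing-upgrade-277160
    docker rm missing-upgrade-277160
    # the new binary rebuilds the missing container on start
    out/minikube-linux-arm64 start -p missing-upgrade-277160 --memory=2200 --driver=docker --container-runtime=containerd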

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-632776 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-632776 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (79.76215ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-632776] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-296322/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-296322/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
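
The flag conflict is deterministic and cheap to check: --no-kubernetes combined with --kubernetes-version exits 14 (MK_USAGE) before any cluster work begins. If the version is coming from a stale global config rather than the command line, the unset command quoted in the stderr above clears it:

    # fails immediately with exit status 14
    out/minikube-linux-arm64 start -p NoKubernetes-632776 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=containerd
    # clear a globally configured version, then retry without the flag
    out/minikube-linux-arm64 config unset kubernetes-version
    out/minikube-linux-arm64 start -p NoKubernetes-632776 --no-kubernetes --driver=docker --container-runtime=containerd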

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (39.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-632776 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-632776 --driver=docker  --container-runtime=containerd: (38.661903376s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-632776 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (39.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (17.77s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-632776 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-632776 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.524547772s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-632776 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-632776 status -o json: exit status 2 (302.787956ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-632776","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-632776
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-632776: (1.946429259s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.77s)
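
A useful detail here: status -o json exits 2 rather than 0 when the host is running but Kubernetes is stopped, so scripts probing this state should branch on the exit code as well as the JSON body. A minimal sketch, assuming the same profile:

    out/minikube-linux-arm64 start -p NoKubernetes-632776 --no-kubernetes --driver=docker --container-runtime=containerd
    # exits 2; JSON reports Host "Running" with Kubelet/APIServer "Stopped"
    out/minikube-linux-arm64 -p NoKubernetes-632776 status -o json || echo "kubernetes not running (exit $?)"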

                                                
                                    
TestNoKubernetes/serial/Start (8.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-632776 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-632776 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.387885274s)
--- PASS: TestNoKubernetes/serial/Start (8.39s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-632776 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-632776 "sudo systemctl is-active --quiet service kubelet": exit status 1 (328.099836ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.33s)
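
The kubelet probe is just systemctl run over minikube ssh: is-active --quiet exits 0 only when the unit is active, so a non-zero exit is the passing case here (the log shows systemctl returning status 3 inside the node, surfaced as exit status 1 by the minikube command):

    # expected to fail while Kubernetes is disabled
    out/minikube-linux-arm64 ssh -p NoKubernetes-632776 "sudo systemctl is-active --quiet service kubelet"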

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.19s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-632776
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-632776: (1.254975534s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-632776 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-632776 --driver=docker  --container-runtime=containerd: (7.961963799s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.96s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-632776 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-632776 "sudo systemctl is-active --quiet service kubelet": exit status 1 (427.901462ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.43s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.02s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.02s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (118.13s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4232983079 start -p stopped-upgrade-895471 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4232983079 start -p stopped-upgrade-895471 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (54.715413704s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4232983079 -p stopped-upgrade-895471 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4232983079 -p stopped-upgrade-895471 stop: (1.302156144s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-895471 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-895471 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m2.111664893s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (118.13s)
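
This is the stopped-cluster variant of the upgrade: the same old-to-new binary handoff as TestRunningBinaryUpgrade, except the old binary stops the cluster first, so the new binary starts it from cold state. Sketch using this run's temp binary path:

    /tmp/minikube-v1.26.0.4232983079 start -p stopped-upgrade-895471 --memory=2200 --vm-driver=docker --container-runtime=containerd
    /tmp/minikube-v1.26.0.4232983079 -p stopped-upgrade-895471 stop
    # the new binary restarts the stopped, older-version profile
    out/minikube-linux-arm64 start -p stopped-upgrade-895471 --memory=2200 --driver=docker --container-runtime=containerd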

                                                
                                    
TestPause/serial/Start (100.71s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-005476 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-005476 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m40.708944055s)
--- PASS: TestPause/serial/Start (100.71s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.41s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-895471
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-895471: (1.412279741s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.41s)

                                                
                                    
TestNetworkPlugins/group/false (3.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-773635 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-773635 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (180.155285ms)

                                                
                                                
-- stdout --
	* [false-773635] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19696
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19696-296322/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-296322/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0924 01:08:52.685502  484352 out.go:345] Setting OutFile to fd 1 ...
	I0924 01:08:52.685702  484352 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:08:52.685727  484352 out.go:358] Setting ErrFile to fd 2...
	I0924 01:08:52.685754  484352 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0924 01:08:52.686015  484352 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19696-296322/.minikube/bin
	I0924 01:08:52.686533  484352 out.go:352] Setting JSON to false
	I0924 01:08:52.687838  484352 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10278,"bootTime":1727129855,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0924 01:08:52.687941  484352 start.go:139] virtualization:  
	I0924 01:08:52.692598  484352 out.go:177] * [false-773635] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0924 01:08:52.694892  484352 out.go:177]   - MINIKUBE_LOCATION=19696
	I0924 01:08:52.695004  484352 notify.go:220] Checking for updates...
	I0924 01:08:52.699517  484352 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0924 01:08:52.703259  484352 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19696-296322/kubeconfig
	I0924 01:08:52.705333  484352 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19696-296322/.minikube
	I0924 01:08:52.709066  484352 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0924 01:08:52.711603  484352 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0924 01:08:52.714427  484352 config.go:182] Loaded profile config "pause-005476": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0924 01:08:52.714586  484352 driver.go:394] Setting default libvirt URI to qemu:///system
	I0924 01:08:52.736574  484352 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0924 01:08:52.736789  484352 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0924 01:08:52.795166  484352 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-24 01:08:52.784764239 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0924 01:08:52.795280  484352 docker.go:318] overlay module found
	I0924 01:08:52.799041  484352 out.go:177] * Using the docker driver based on user configuration
	I0924 01:08:52.801154  484352 start.go:297] selected driver: docker
	I0924 01:08:52.801171  484352 start.go:901] validating driver "docker" against <nil>
	I0924 01:08:52.801185  484352 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0924 01:08:52.803527  484352 out.go:201] 
	W0924 01:08:52.805459  484352 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0924 01:08:52.807623  484352 out.go:201] 

                                                
                                                
** /stderr **
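The failure itself is the expected result here: with --container-runtime=containerd, minikube requires a CNI, so --cni=false is rejected up front with exit status 14 (MK_USAGE). Dropping the flag, or naming a concrete CNI, avoids the error; which CNI values are accepted varies by minikube release, so treat the second command as a sketch:

    # rejected: the containerd runtime requires CNI
    out/minikube-linux-arm64 start -p false-773635 --cni=false --driver=docker --container-runtime=containerd
    # works: let minikube choose a CNI (or pass one explicitly, e.g. --cni=bridge)
    out/minikube-linux-arm64 start -p false-773635 --driver=docker --container-runtime=containerd

The debugLogs dump below runs against a profile that was never created, so every probe reports a missing context or profile; that is why the group is still marked [pass: true].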
net_test.go:88: 
----------------------- debugLogs start: false-773635 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-773635

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-773635

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-773635

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-773635

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-773635

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-773635

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-773635

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-773635

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-773635

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-773635

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-773635

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-773635" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-773635" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-773635" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-773635" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-773635" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-773635" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-773635" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-773635" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-773635" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-773635" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-773635" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19696-296322/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 24 Sep 2024 01:08:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-005476
contexts:
- context:
    cluster: pause-005476
    extensions:
    - extension:
        last-update: Tue, 24 Sep 2024 01:08:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-005476
  name: pause-005476
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-005476
  user:
    client-certificate: /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/pause-005476/client.crt
    client-key: /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/pause-005476/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-773635

>>> host: docker daemon status:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

>>> host: docker daemon config:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

>>> host: /etc/docker/daemon.json:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

>>> host: docker system info:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

>>> host: cri-docker daemon status:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

>>> host: cri-docker daemon config:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

>>> host: cri-dockerd version:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

>>> host: containerd daemon status:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

>>> host: containerd daemon config:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

>>> host: /etc/containerd/config.toml:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

>>> host: containerd config dump:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

>>> host: crio daemon status:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

>>> host: crio daemon config:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

>>> host: /etc/crio:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

>>> host: crio config:
* Profile "false-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-773635"

----------------------- debugLogs end: false-773635 [took: 3.333097759s] --------------------------------
helpers_test.go:175: Cleaning up "false-773635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-773635
--- PASS: TestNetworkPlugins/group/false (3.66s)
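The wall of "Profile not found" / "context was not found" responses in the debugLogs dump above is expected for this subtest: the false CNI case passes in 3.66s without ever starting a cluster, so by the time debugLogs collects host and runtime state there is no false-773635 profile or kubeconfig context left to query. A quick manual sketch of the same two lookups (both would come back empty or failing, just as above):

    minikube profile list                      # false-773635 never appears
    kubectl config get-contexts false-773635   # error: the context was never created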

TestPause/serial/SecondStartNoReconfiguration (7.6s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-005476 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-005476 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.580079139s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.60s)

TestPause/serial/Pause (0.88s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-005476 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.88s)

TestPause/serial/VerifyStatus (0.35s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-005476 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-005476 --output=json --layout=cluster: exit status 2 (354.358446ms)
-- stdout --
	{"Name":"pause-005476","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-005476","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.35s)
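The JSON above is the paused-cluster shape of minikube status --output=json --layout=cluster: StatusCode 418/"Paused" for the cluster and 405/"Stopped" for the kubelet. A sketch of pulling those fields out (assumes jq is installed; the trailing || true matters under pipefail because, as the test shows, the status command itself exits 2 while paused):

    out/minikube-linux-arm64 status -p pause-005476 --output=json --layout=cluster \
      | jq '{cluster: .StatusName, kubelet: .Nodes[0].Components.kubelet.StatusName}' || true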

TestPause/serial/Unpause (1.11s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-005476 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-005476 --alsologtostderr -v=5: (1.11452481s)
--- PASS: TestPause/serial/Unpause (1.11s)

TestPause/serial/PauseAgain (1.14s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-005476 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-005476 --alsologtostderr -v=5: (1.137623991s)
--- PASS: TestPause/serial/PauseAgain (1.14s)

TestPause/serial/DeletePaused (2.73s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-005476 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-005476 --alsologtostderr -v=5: (2.725186s)
--- PASS: TestPause/serial/DeletePaused (2.73s)

TestPause/serial/VerifyDeletedResources (0.45s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-005476
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-005476: exit status 1 (22.311014ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-005476: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.45s)
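The cleanup check leans on docker volume inspect exiting non-zero once the profile volume is gone (the "no such volume" error above). The same guard in script form, as a sketch rather than the test's own code:

    if ! docker volume inspect pause-005476 >/dev/null 2>&1; then
        echo "profile volume cleaned up"
    fi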

TestStartStop/group/old-k8s-version/serial/FirstStart (179.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-654890 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0924 01:10:48.752068  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:12:33.614202  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:12:45.682453  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-654890 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m59.363737572s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (179.36s)

TestStartStop/group/no-preload/serial/FirstStart (73.85s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-558135 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-558135 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m13.853206814s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.85s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.98s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-654890 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f7b560eb-705e-460c-9502-970496a0acbc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f7b560eb-705e-460c-9502-970496a0acbc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.006015907s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-654890 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.98s)
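The deploy step applies a busybox manifest, waits for the pod to become Ready, then reads the container's open-file limit with ulimit -n. A hand-run equivalent (a sketch; kubectl wait stands in for the test's own polling helper):

    kubectl --context old-k8s-version-654890 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-654890 wait --for=condition=Ready pod/busybox --timeout=8m
    kubectl --context old-k8s-version-654890 exec busybox -- /bin/sh -c "ulimit -n"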

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-654890 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-654890 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.237760087s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-654890 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.43s)
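Note that the addon is pointed at a deliberately unreachable registry (fake.domain), presumably so the test exercises the image/registry override plumbing rather than a working metrics-server. The exact flags from the run above:

    minikube addons enable metrics-server -p old-k8s-version-654890 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
        --registries=MetricsServer=fake.domain
    kubectl --context old-k8s-version-654890 describe deploy/metrics-server -n kube-system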

TestStartStop/group/old-k8s-version/serial/Stop (13.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-654890 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-654890 --alsologtostderr -v=3: (13.626378012s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.63s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-654890 -n old-k8s-version-654890
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-654890 -n old-k8s-version-654890: exit status 7 (88.130378ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-654890 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)
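minikube status encodes host state in its exit code; the exit status 7 above corresponds to the stopped host, which the test explicitly tolerates ("may be ok"). A sketch of branching on it:

    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-654890 -n old-k8s-version-654890
    case $? in
        0) echo "host running" ;;
        7) echo "host stopped - expected right after 'minikube stop'" ;;
        *) echo "unexpected status code" ;;
    esac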

TestStartStop/group/no-preload/serial/DeployApp (9.48s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-558135 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [14d25774-a030-4b0d-9ffb-7247cddda279] Pending
helpers_test.go:344: "busybox" [14d25774-a030-4b0d-9ffb-7247cddda279] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [14d25774-a030-4b0d-9ffb-7247cddda279] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003016671s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-558135 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.48s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-558135 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-558135 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.072645018s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-558135 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/no-preload/serial/Stop (12.17s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-558135 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-558135 --alsologtostderr -v=3: (12.168889464s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.17s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-558135 -n no-preload-558135
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-558135 -n no-preload-558135: exit status 7 (72.585649ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-558135 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (289.26s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-558135 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0924 01:15:36.676394  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:17:33.613362  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:17:45.683076  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-558135 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m48.867745708s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-558135 -n no-preload-558135
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (289.26s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.03s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-r7gd7" [e1800a32-4b14-4801-b2b7-f34e583a52d8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.030112197s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.03s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-r7gd7" [e1800a32-4b14-4801-b2b7-f34e583a52d8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003620362s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-558135 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-558135 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
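The image check lists everything in the container runtime and calls out images beyond the stock minikube/Kubernetes set, here kindnetd and the busybox test image. A sketch of the same listing by hand (the repoTags field name is an assumption about the JSON shape, not something this log shows):

    out/minikube-linux-arm64 -p no-preload-558135 image list --format=json | jq -r '.[].repoTags[]'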

TestStartStop/group/no-preload/serial/Pause (3.07s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-558135 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-558135 -n no-preload-558135
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-558135 -n no-preload-558135: exit status 2 (331.163941ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-558135 -n no-preload-558135
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-558135 -n no-preload-558135: exit status 2 (321.560968ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-558135 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-558135 -n no-preload-558135
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-558135 -n no-preload-558135
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.07s)
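The pause cycle above runs: pause, confirm the apiserver reports Paused and the kubelet Stopped (each via exit status 2), unpause, confirm both status calls succeed again. Condensed into a sketch:

    minikube pause -p no-preload-558135
    minikube status --format={{.APIServer}} -p no-preload-558135   # Paused, exit 2
    minikube status --format={{.Kubelet}} -p no-preload-558135     # Stopped, exit 2
    minikube unpause -p no-preload-558135
    minikube status --format={{.APIServer}} -p no-preload-558135   # exits 0 once unpaused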

TestStartStop/group/embed-certs/serial/FirstStart (85.08s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-456459 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-456459 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m25.083385342s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.08s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-h24mv" [e9a6f8bd-4e13-4495-8e18-a1c8fd57ea76] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004212468s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-h24mv" [e9a6f8bd-4e13-4495-8e18-a1c8fd57ea76] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019548086s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-654890 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.18s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-654890 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/old-k8s-version/serial/Pause (3.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-654890 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-654890 --alsologtostderr -v=1: (1.048281933s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-654890 -n old-k8s-version-654890
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-654890 -n old-k8s-version-654890: exit status 2 (363.417765ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-654890 -n old-k8s-version-654890
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-654890 -n old-k8s-version-654890: exit status 2 (425.151632ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-654890 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-654890 -n old-k8s-version-654890
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-654890 -n old-k8s-version-654890
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.70s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (67.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-944486 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-944486 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m7.142470009s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (67.14s)

TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-456459 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6477cb6a-6f24-4af1-8d39-8a159f6eee42] Pending
helpers_test.go:344: "busybox" [6477cb6a-6f24-4af1-8d39-8a159f6eee42] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6477cb6a-6f24-4af1-8d39-8a159f6eee42] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003461984s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-456459 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.35s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-944486 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0341614d-42a7-453b-b50d-55f35693579d] Pending
helpers_test.go:344: "busybox" [0341614d-42a7-453b-b50d-55f35693579d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0341614d-42a7-453b-b50d-55f35693579d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004216714s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-944486 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.36s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-456459 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-456459 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.051759564s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-456459 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/embed-certs/serial/Stop (12.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-456459 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-456459 --alsologtostderr -v=3: (12.1075403s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.11s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-944486 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-944486 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-944486 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-944486 --alsologtostderr -v=3: (12.063348978s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.06s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-456459 -n embed-certs-456459
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-456459 -n embed-certs-456459: exit status 7 (70.389068ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-456459 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (271.64s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-456459 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-456459 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m31.295398396s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-456459 -n embed-certs-456459
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (271.64s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-944486 -n default-k8s-diff-port-944486
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-944486 -n default-k8s-diff-port-944486: exit status 7 (74.994439ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-944486 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (272.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-944486 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0924 01:22:33.613740  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:22:45.683105  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:23:29.532474  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:23:29.538985  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:23:29.550463  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:23:29.571886  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:23:29.613568  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:23:29.695071  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:23:29.856953  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:23:30.178589  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:23:30.820120  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:23:32.102185  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:23:34.663597  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:23:39.785235  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:23:50.026737  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:24:10.509354  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:24:42.344629  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:24:42.351197  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:24:42.362697  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:24:42.384335  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:24:42.425790  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:24:42.507385  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:24:42.668893  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:24:42.991520  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:24:43.633602  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:24:44.915111  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:24:47.476775  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:24:51.471064  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:24:52.598282  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:25:02.840099  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:25:23.321501  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:26:04.283722  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:26:13.393338  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-944486 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m32.085293208s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-944486 -n default-k8s-diff-port-944486
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (272.43s)
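The cert_rotation errors interleaved above (client.crt: no such file or directory) most likely come from client-go certificate-rotation watchers inside the long-lived test process that still reference kubeconfig entries for the already-deleted old-k8s-version-654890 and no-preload-558135 profiles; they repeat on a timer and have no bearing on this test's result.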

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-x4f26" [de4a3a8e-cf1a-438d-acb5-3ea0ecfeaea8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003185035s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-flhw9" [1d4044d4-85b6-4e02-b87f-0e2dfabf11e8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006057641s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-x4f26" [de4a3a8e-cf1a-438d-acb5-3ea0ecfeaea8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004221983s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-456459 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-flhw9" [1d4044d4-85b6-4e02-b87f-0e2dfabf11e8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006596339s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-944486 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.17s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-456459 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.22s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-456459 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-456459 -n embed-certs-456459
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-456459 -n embed-certs-456459: exit status 2 (493.257898ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-456459 -n embed-certs-456459
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-456459 -n embed-certs-456459: exit status 2 (326.415585ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-456459 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-456459 -n embed-certs-456459
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-456459 -n embed-certs-456459
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.22s)
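
The pause/unpause round-trip above can be reproduced by hand. While the cluster is paused, the status queries deliberately exit with code 2 (the harness logs "may be ok"), so a script must not treat that as a failure. A minimal sketch using the profile name from this run; the trailing comments describe the output the log shows:

    # Pause, inspect the reported component states, then resume.
    out/minikube-linux-arm64 pause -p embed-certs-456459
    out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-456459   # "Paused", exit 2
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-456459     # "Stopped", exit 2
    out/minikube-linux-arm64 unpause -p embed-certs-456459
    out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-456459   # back to exit 0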

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-944486 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.96s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-944486 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-944486 --alsologtostderr -v=1: (1.077595587s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-944486 -n default-k8s-diff-port-944486
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-944486 -n default-k8s-diff-port-944486: exit status 2 (397.375396ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-944486 -n default-k8s-diff-port-944486
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-944486 -n default-k8s-diff-port-944486: exit status 2 (379.611208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-944486 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-944486 --alsologtostderr -v=1: (1.029935109s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-944486 -n default-k8s-diff-port-944486
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-944486 -n default-k8s-diff-port-944486
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.96s)

TestStartStop/group/newest-cni/serial/FirstStart (45.92s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-535831 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-535831 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (45.919985915s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.92s)
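
For reference, the FirstStart invocation reflowed onto multiple lines; every flag is taken verbatim from the run above:

    out/minikube-linux-arm64 start -p newest-cni-535831 \
      --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa \
      --feature-gates ServerSideApply=true \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.31.1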

TestNetworkPlugins/group/auto/Start (91.65s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-773635 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0924 01:27:26.205726  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:28.753984  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:27:33.613620  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-773635 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m31.651517407s)
--- PASS: TestNetworkPlugins/group/auto/Start (91.65s)
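
The auto variant starts with no explicit --cni flag, so minikube picks the default network plugin for the containerd runtime; the command, reflowed from the log:

    out/minikube-linux-arm64 start -p auto-773635 \
      --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m \
      --driver=docker --container-runtime=containerd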

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.35s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-535831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-535831 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.349725274s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.35s)
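
The test points the metrics-server addon at a stand-in image and registry so the enable path can be exercised without the real addon having to come up; reflowed from the log:

    out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-535831 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain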

TestStartStop/group/newest-cni/serial/Stop (1.33s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-535831 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-535831 --alsologtostderr -v=3: (1.334771323s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.33s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.31s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-535831 -n newest-cni-535831
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-535831 -n newest-cni-535831: exit status 7 (141.506922ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-535831 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.31s)
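
A stopped profile makes status exit with code 7, which the harness tolerates before enabling the dashboard addon offline. A sketch of the same check as a shell snippet; the "|| true" guard is an assumption added so set -e scripts survive the expected non-zero exit:

    host_state=$(out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-535831 || true)
    echo "host: ${host_state}"    # "Stopped" while the profile is down
    out/minikube-linux-arm64 addons enable dashboard -p newest-cni-535831 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4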

TestStartStop/group/newest-cni/serial/SecondStart (17.00s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-535831 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0924 01:27:45.682692  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/addons-321431/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-535831 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (16.663372726s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-535831 -n newest-cni-535831
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.00s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-535831 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)
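
VerifyKubernetesImages dumps the profile's image list as JSON and flags anything outside the expected minikube set (here kindnetd). To eyeball the same list, one could pipe it through jq; the .[].repoTags[] path is an assumption about the JSON shape, not something the log confirms:

    out/minikube-linux-arm64 -p newest-cni-535831 image list --format=json \
      | jq -r '.[].repoTags[]'    # assumed field name; prints one image ref per line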

TestStartStop/group/newest-cni/serial/Pause (3.07s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-535831 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-535831 -n newest-cni-535831
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-535831 -n newest-cni-535831: exit status 2 (311.038301ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-535831 -n newest-cni-535831
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-535831 -n newest-cni-535831: exit status 2 (298.668371ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-535831 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-535831 -n newest-cni-535831
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-535831 -n newest-cni-535831
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.07s)
E0924 01:33:03.810270  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/default-k8s-diff-port-944486/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:33:29.532442  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:33:31.019176  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/auto-773635/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:33:31.025612  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/auto-773635/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:33:31.036983  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/auto-773635/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:33:31.058362  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/auto-773635/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:33:31.099841  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/auto-773635/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:33:31.181241  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/auto-773635/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:33:31.342721  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/auto-773635/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:33:31.664430  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/auto-773635/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:33:32.306771  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/auto-773635/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:33:33.588486  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/auto-773635/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:33:36.149858  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/auto-773635/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:33:41.272026  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/auto-773635/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:33:51.513550  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/auto-773635/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:34:04.039006  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/kindnet-773635/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (61.60s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-773635 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0924 01:28:29.532595  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/old-k8s-version-654890/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-773635 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m1.602393048s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (61.60s)

TestNetworkPlugins/group/auto/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-773635 "pgrep -a kubelet"
I0924 01:28:30.598576  301711 config.go:182] Loaded profile config "auto-773635": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.34s)
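
KubeletFlags simply asks the node for the kubelet command line over SSH so the test can assert on individual flags. The grep step below is an illustrative extra, not part of the test, and assumes the runtime-endpoint flag is present on the kubelet command line:

    out/minikube-linux-arm64 ssh -p auto-773635 "pgrep -a kubelet"
    out/minikube-linux-arm64 ssh -p auto-773635 "pgrep -a kubelet" \
      | grep -o -- '--container-runtime-endpoint=[^ ]*'   # pick out one flag (assumed present)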

TestNetworkPlugins/group/auto/NetCatPod (9.46s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-773635 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5qkkh" [6f70cfc0-6def-468f-b2ed-131c19e7c2a3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5qkkh" [6f70cfc0-6def-468f-b2ed-131c19e7c2a3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004325906s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.46s)
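
The harness polls the pod list until app=netcat is Running and Ready. Outside the harness, roughly the same wait can be expressed with kubectl alone; this is a sketch in which kubectl wait stands in for the helper's polling loop:

    kubectl --context auto-773635 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-773635 wait --for=condition=Ready pod -l app=netcat --timeout=15m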

TestNetworkPlugins/group/auto/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-773635 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-773635 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-773635 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
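
DNS, Localhost, and HairPin are three one-shot execs into the netcat deployment: an in-cluster DNS lookup, a loopback connect, and a connect back to the pod through its own service name. Collected verbatim from the log (each later plugin group repeats the same trio against its own context):

    kubectl --context auto-773635 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-773635 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-773635 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"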

TestNetworkPlugins/group/calico/Start (72.43s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-773635 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-773635 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m12.430449512s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.43s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qcwcf" [76bdb813-01b9-4d66-8b66-87fda45a59a6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004784936s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
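
ControllerPod waits up to 10 minutes for the CNI's own daemon pod to report healthy. An equivalent standalone wait, as a sketch; the label selector and namespace are taken from the log:

    kubectl --context kindnet-773635 -n kube-system wait \
      --for=condition=Ready pod -l app=kindnet --timeout=10m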

TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-773635 "pgrep -a kubelet"
I0924 01:29:10.392308  301711 config.go:182] Loaded profile config "kindnet-773635": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.33s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-773635 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tbvsp" [e4239281-6716-4fb1-a962-f12d129cd871] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tbvsp" [e4239281-6716-4fb1-a962-f12d129cd871] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004236812s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.33s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-773635 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.22s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-773635 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

TestNetworkPlugins/group/kindnet/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-773635 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/Start (57.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-773635 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0924 01:30:10.047176  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/no-preload-558135/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-773635 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (57.130632358s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.13s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-rpfkc" [0189d53d-8dee-4b27-8669-2766cbde3b7a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006699915s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-773635 "pgrep -a kubelet"
I0924 01:30:20.057274  301711 config.go:182] Loaded profile config "calico-773635": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

TestNetworkPlugins/group/calico/NetCatPod (10.42s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-773635 replace --force -f testdata/netcat-deployment.yaml
I0924 01:30:20.436513  301711 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-clnp6" [1ca31b30-a9d5-47f0-8151-8e4c22a2448a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-clnp6" [1ca31b30-a9d5-47f0-8151-8e4c22a2448a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004580142s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.42s)

TestNetworkPlugins/group/calico/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-773635 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-773635 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

TestNetworkPlugins/group/calico/HairPin (0.33s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-773635 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.33s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-773635 "pgrep -a kubelet"
I0924 01:30:43.127744  301711 config.go:182] Loaded profile config "custom-flannel-773635": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.38s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-773635 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z97lp" [7df3dad4-65ff-4afc-8869-9feb2187d068] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z97lp" [7df3dad4-65ff-4afc-8869-9feb2187d068] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004105625s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.38s)

TestNetworkPlugins/group/enable-default-cni/Start (82.05s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-773635 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-773635 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m22.048965261s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (82.05s)

TestNetworkPlugins/group/custom-flannel/DNS (0.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-773635 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-773635 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-773635 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

TestNetworkPlugins/group/flannel/Start (53.25s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-773635 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0924 01:31:41.872564  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/default-k8s-diff-port-944486/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:31:41.879010  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/default-k8s-diff-port-944486/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:31:41.890381  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/default-k8s-diff-port-944486/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:31:41.911730  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/default-k8s-diff-port-944486/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:31:41.953183  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/default-k8s-diff-port-944486/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:31:42.034612  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/default-k8s-diff-port-944486/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:31:42.196077  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/default-k8s-diff-port-944486/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:31:42.517665  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/default-k8s-diff-port-944486/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:31:43.159631  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/default-k8s-diff-port-944486/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:31:44.440888  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/default-k8s-diff-port-944486/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:31:47.002256  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/default-k8s-diff-port-944486/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:31:52.124454  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/default-k8s-diff-port-944486/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:32:02.366402  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/default-k8s-diff-port-944486/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-773635 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (53.24632397s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.25s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7qcpm" [1aa3f8a0-af29-4a74-8e40-320804e31a68] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00416354s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-773635 "pgrep -a kubelet"
I0924 01:32:16.268041  301711 config.go:182] Loaded profile config "enable-default-cni-773635": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-773635 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cdgkh" [45228ff1-06df-497c-b16c-5e3bea38fa3f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0924 01:32:16.677864  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/functional-346828/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-cdgkh" [45228ff1-06df-497c-b16c-5e3bea38fa3f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.00317247s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.28s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-773635 "pgrep -a kubelet"
I0924 01:32:20.935241  301711 config.go:182] Loaded profile config "flannel-773635": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (10.28s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-773635 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hczt9" [fa050664-dc05-464d-83b9-19b1e48e81ba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0924 01:32:22.848748  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/default-k8s-diff-port-944486/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-hczt9" [fa050664-dc05-464d-83b9-19b1e48e81ba] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003371936s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.28s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-773635 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-773635 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-773635 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/flannel/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-773635 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

TestNetworkPlugins/group/flannel/Localhost (0.20s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-773635 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.20s)

TestNetworkPlugins/group/flannel/HairPin (0.19s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-773635 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

TestNetworkPlugins/group/bridge/Start (74.12s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-773635 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-773635 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m14.115915681s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.12s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-773635 "pgrep -a kubelet"
E0924 01:34:04.051442  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/kindnet-773635/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:34:04.062835  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/kindnet-773635/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:34:04.084425  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/kindnet-773635/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:34:04.125822  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/kindnet-773635/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:34:04.215015  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/kindnet-773635/client.crt: no such file or directory" logger="UnhandledError"
I0924 01:34:04.327947  301711 config.go:182] Loaded profile config "bridge-773635": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (9.25s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-773635 replace --force -f testdata/netcat-deployment.yaml
E0924 01:34:04.376681  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/kindnet-773635/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-t49z7" [9a4994fa-3b4a-4954-b0ce-e864c7a45e33] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0924 01:34:04.698365  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/kindnet-773635/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:34:05.340483  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/kindnet-773635/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:34:06.621961  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/kindnet-773635/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-t49z7" [9a4994fa-3b4a-4954-b0ce-e864c7a45e33] Running
E0924 01:34:09.183531  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/kindnet-773635/client.crt: no such file or directory" logger="UnhandledError"
E0924 01:34:11.994976  301711 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/auto-773635/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003697629s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

TestNetworkPlugins/group/bridge/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-773635 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-773635 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-773635 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

Test skip (27/327)

TestDownloadOnly/v1.20.0/cached-images (0.00s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0.00s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0.00s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.55s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-765331 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-765331" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-765331
--- SKIP: TestDownloadOnlyKic (0.55s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-631890" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-631890
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.57s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-773635 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-773635

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-773635

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-773635

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-773635

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-773635

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-773635

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-773635

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-773635

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-773635

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-773635

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: /etc/hosts:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: /etc/resolv.conf:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-773635

>>> host: crictl pods:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: crictl containers:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> k8s: describe netcat deployment:
error: context "kubenet-773635" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-773635" does not exist

>>> k8s: netcat logs:
error: context "kubenet-773635" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-773635" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-773635" does not exist

>>> k8s: coredns logs:
error: context "kubenet-773635" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-773635" does not exist

>>> k8s: api server logs:
error: context "kubenet-773635" does not exist

>>> host: /etc/cni:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: ip a s:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: ip r s:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: iptables-save:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: iptables table nat:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-773635" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-773635" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-773635" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: kubelet daemon config:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> k8s: kubelet logs:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19696-296322/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 24 Sep 2024 01:08:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-005476
contexts:
- context:
    cluster: pause-005476
    extensions:
    - extension:
        last-update: Tue, 24 Sep 2024 01:08:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-005476
  name: pause-005476
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-005476
  user:
    client-certificate: /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/pause-005476/client.crt
    client-key: /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/pause-005476/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-773635

>>> host: docker daemon status:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: docker daemon config:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: docker system info:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: cri-docker daemon status:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: cri-docker daemon config:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: cri-dockerd version:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: containerd daemon status:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: containerd daemon config:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: containerd config dump:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: crio daemon status:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: crio daemon config:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: /etc/crio:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

>>> host: crio config:
* Profile "kubenet-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-773635"

----------------------- debugLogs end: kubenet-773635 [took: 3.404457493s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-773635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-773635
--- SKIP: TestNetworkPlugins/group/kubenet (3.57s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.26s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-773635 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-773635

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-773635

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-773635

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-773635

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-773635

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-773635

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-773635

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-773635

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-773635

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-773635

>>> host: /etc/nsswitch.conf:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: /etc/hosts:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: /etc/resolv.conf:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-773635

>>> host: crictl pods:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: crictl containers:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> k8s: describe netcat deployment:
error: context "cilium-773635" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-773635" does not exist

>>> k8s: netcat logs:
error: context "cilium-773635" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-773635" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-773635" does not exist

>>> k8s: coredns logs:
error: context "cilium-773635" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-773635" does not exist

>>> k8s: api server logs:
error: context "cilium-773635" does not exist

>>> host: /etc/cni:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: ip a s:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: ip r s:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: iptables-save:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: iptables table nat:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-773635

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-773635

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-773635" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-773635" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-773635

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-773635

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-773635" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-773635" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-773635" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-773635" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-773635" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: kubelet daemon config:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> k8s: kubelet logs:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19696-296322/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 24 Sep 2024 01:08:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-005476
contexts:
- context:
    cluster: pause-005476
    extensions:
    - extension:
        last-update: Tue, 24 Sep 2024 01:08:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-005476
  name: pause-005476
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-005476
  user:
    client-certificate: /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/pause-005476/client.crt
    client-key: /home/jenkins/minikube-integration/19696-296322/.minikube/profiles/pause-005476/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-773635

>>> host: docker daemon status:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: docker daemon config:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: docker system info:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: cri-docker daemon status:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: cri-docker daemon config:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: cri-dockerd version:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: containerd daemon status:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: containerd daemon config:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: containerd config dump:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: crio daemon status:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: crio daemon config:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: /etc/crio:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

>>> host: crio config:
* Profile "cilium-773635" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-773635"

----------------------- debugLogs end: cilium-773635 [took: 4.079638811s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-773635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-773635
--- SKIP: TestNetworkPlugins/group/cilium (4.26s)

                                                
                                    