Test Report: Docker_Linux_containerd_arm64 19678

8ef5536409705b0cbf1ed8a719bbf7f792426b16:2024-09-20:36299

Failed tests (2/327)

|-------|---------------------------------------------------------|--------------|
| Order | Failed test                                             | Duration (s) |
|-------|---------------------------------------------------------|--------------|
| 29    | TestAddons/serial/Volcano                               | 200.13       |
| 301   | TestStartStop/group/old-k8s-version/serial/SecondStart  | 377.84       |
|-------|---------------------------------------------------------|--------------|
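Both failures can be re-run locally against a freshly built minikube binary. A rough sketch of the invocation follows; the test package path, timeout, and -minikube-start-args values are assumptions about a local checkout, not taken from this report.

    # Illustrative only: rerun one failing test from this report.
    # Flags after -args are consumed by the integration test harness.
    go test -v ./test/integration \
      -run 'TestAddons/serial/Volcano' \
      -timeout 60m \
      -args -minikube-start-args="--driver=docker --container-runtime=containerd"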
TestAddons/serial/Volcano (200.13s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:835: volcano-scheduler stabilized in 42.521045ms
addons_test.go:851: volcano-controller stabilized in 43.096329ms
addons_test.go:843: volcano-admission stabilized in 43.717881ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-g72p9" [c20df0f9-70ae-4065-9e2a-2ce7f023eb7a] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003346619s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-p5jsb" [aaebdad8-eeb4-4906-b434-34e6e92794cd] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004366308s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-cmlsq" [ca1b28f8-43f4-44f1-b114-6ebf014d0673] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.009849071s
addons_test.go:870: (dbg) Run:  kubectl --context addons-388835 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-388835 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-388835 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [c8c93cbf-e3c7-4aac-aae5-9d2e224d43d2] Pending
helpers_test.go:344: "test-job-nginx-0" [c8c93cbf-e3c7-4aac-aae5-9d2e224d43d2] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:902: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:902: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-388835 -n addons-388835
addons_test.go:902: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-20 19:30:01.289884092 +0000 UTC m=+438.459971836
addons_test.go:902: (dbg) Run:  kubectl --context addons-388835 describe po test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-388835 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-32b524fe-4427-453e-b10c-20475d16c577
volcano.sh/job-name: test-job
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
nginx:
Image:      nginx:latest
Port:       <none>
Host Port:  <none>
Command:
sleep
10m
Limits:
cpu:  1
Requests:
cpu:  1
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qltmv (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-qltmv:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age    From     Message
----     ------            ----   ----     -------
Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:902: (dbg) Run:  kubectl --context addons-388835 logs test-job-nginx-0 -n my-volcano
addons_test.go:902: (dbg) kubectl --context addons-388835 logs test-job-nginx-0 -n my-volcano:
addons_test.go:903: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
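The vcjob pod never scheduled because its 1-CPU request could not be satisfied on the single 2-CPU node (the container is created with --cpus=2, as shown in the docker inspect output below), most likely because system and addon pods already account for more than 1 CPU of requests. As a hypothetical follow-up outside the test harness, the remaining allocatable CPU could be checked like this (context and node name taken from this run):

    # Illustrative diagnosis only, not part of the test run: compare the node's
    # allocated CPU requests with the 1-CPU request of test-job-nginx-0.
    kubectl --context addons-388835 describe node addons-388835 | grep -A 8 'Allocated resources'
    kubectl --context addons-388835 get pods -A \
      -o custom-columns=NS:.metadata.namespace,POD:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu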
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-388835
helpers_test.go:235: (dbg) docker inspect addons-388835:

-- stdout --
	[
	    {
	        "Id": "9a53efdf4d85976badca27555f38139bc7aadf5c58ed071d3bac84f814fdb9ec",
	        "Created": "2024-09-20T19:23:28.927825122Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 741050,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T19:23:29.06326866Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/9a53efdf4d85976badca27555f38139bc7aadf5c58ed071d3bac84f814fdb9ec/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9a53efdf4d85976badca27555f38139bc7aadf5c58ed071d3bac84f814fdb9ec/hostname",
	        "HostsPath": "/var/lib/docker/containers/9a53efdf4d85976badca27555f38139bc7aadf5c58ed071d3bac84f814fdb9ec/hosts",
	        "LogPath": "/var/lib/docker/containers/9a53efdf4d85976badca27555f38139bc7aadf5c58ed071d3bac84f814fdb9ec/9a53efdf4d85976badca27555f38139bc7aadf5c58ed071d3bac84f814fdb9ec-json.log",
	        "Name": "/addons-388835",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-388835:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-388835",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/76a449cf6aea898a860397dcf5d4630c42c13c407740f4ac04feadd1277c004e-init/diff:/var/lib/docker/overlay2/0eebc2dd792544f9be347ae96aac5eeb2f1e9299f1fe8e5c7ced4da8d5f2fc78/diff",
	                "MergedDir": "/var/lib/docker/overlay2/76a449cf6aea898a860397dcf5d4630c42c13c407740f4ac04feadd1277c004e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/76a449cf6aea898a860397dcf5d4630c42c13c407740f4ac04feadd1277c004e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/76a449cf6aea898a860397dcf5d4630c42c13c407740f4ac04feadd1277c004e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-388835",
	                "Source": "/var/lib/docker/volumes/addons-388835/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-388835",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-388835",
	                "name.minikube.sigs.k8s.io": "addons-388835",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "61d555fac9805602c5f8870a97f82a3d8d75055b8328f68cc7164d26fd4128d9",
	            "SandboxKey": "/var/run/docker/netns/61d555fac980",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-388835": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "b2fcf60b9c3a62191b5e9af5f7109d89a173ea744446443b8b238c87f4ff56ee",
	                    "EndpointID": "28a83df2c0a6933dffffc76f56eba2b7eb1c0933a9270463a56ac3da4d60660d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-388835",
	                        "9a53efdf4d85"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-388835 -n addons-388835
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-388835 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-388835 logs -n 25: (1.654144941s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-790946   | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC |                     |
	|         | -p download-only-790946              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	| delete  | -p download-only-790946              | download-only-790946   | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	| start   | -o=json --download-only              | download-only-509320   | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC |                     |
	|         | -p download-only-509320              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 20 Sep 24 19:23 UTC | 20 Sep 24 19:23 UTC |
	| delete  | -p download-only-509320              | download-only-509320   | jenkins | v1.34.0 | 20 Sep 24 19:23 UTC | 20 Sep 24 19:23 UTC |
	| delete  | -p download-only-790946              | download-only-790946   | jenkins | v1.34.0 | 20 Sep 24 19:23 UTC | 20 Sep 24 19:23 UTC |
	| delete  | -p download-only-509320              | download-only-509320   | jenkins | v1.34.0 | 20 Sep 24 19:23 UTC | 20 Sep 24 19:23 UTC |
	| start   | --download-only -p                   | download-docker-973006 | jenkins | v1.34.0 | 20 Sep 24 19:23 UTC |                     |
	|         | download-docker-973006               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-973006            | download-docker-973006 | jenkins | v1.34.0 | 20 Sep 24 19:23 UTC | 20 Sep 24 19:23 UTC |
	| start   | --download-only -p                   | binary-mirror-080065   | jenkins | v1.34.0 | 20 Sep 24 19:23 UTC |                     |
	|         | binary-mirror-080065                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:33183               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-080065              | binary-mirror-080065   | jenkins | v1.34.0 | 20 Sep 24 19:23 UTC | 20 Sep 24 19:23 UTC |
	| addons  | enable dashboard -p                  | addons-388835          | jenkins | v1.34.0 | 20 Sep 24 19:23 UTC |                     |
	|         | addons-388835                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-388835          | jenkins | v1.34.0 | 20 Sep 24 19:23 UTC |                     |
	|         | addons-388835                        |                        |         |         |                     |                     |
	| start   | -p addons-388835 --wait=true         | addons-388835          | jenkins | v1.34.0 | 20 Sep 24 19:23 UTC | 20 Sep 24 19:26 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:23:04
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:23:04.436755  740558 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:23:04.436963  740558 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:23:04.436989  740558 out.go:358] Setting ErrFile to fd 2...
	I0920 19:23:04.437009  740558 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:23:04.437313  740558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-734403/.minikube/bin
	I0920 19:23:04.437815  740558 out.go:352] Setting JSON to false
	I0920 19:23:04.438759  740558 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11136,"bootTime":1726849049,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 19:23:04.438859  740558 start.go:139] virtualization:  
	I0920 19:23:04.441385  740558 out.go:177] * [addons-388835] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 19:23:04.443865  740558 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 19:23:04.444024  740558 notify.go:220] Checking for updates...
	I0920 19:23:04.448156  740558 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:23:04.450465  740558 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-734403/kubeconfig
	I0920 19:23:04.452591  740558 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-734403/.minikube
	I0920 19:23:04.454663  740558 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 19:23:04.456978  740558 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:23:04.459267  740558 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:23:04.487844  740558 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:23:04.487971  740558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:23:04.540025  740558 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 19:23:04.530610821 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:23:04.540146  740558 docker.go:318] overlay module found
	I0920 19:23:04.543555  740558 out.go:177] * Using the docker driver based on user configuration
	I0920 19:23:04.545397  740558 start.go:297] selected driver: docker
	I0920 19:23:04.545419  740558 start.go:901] validating driver "docker" against <nil>
	I0920 19:23:04.545434  740558 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:23:04.546065  740558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:23:04.592721  740558 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 19:23:04.583018874 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:23:04.592947  740558 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 19:23:04.593183  740558 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:23:04.595221  740558 out.go:177] * Using Docker driver with root privileges
	I0920 19:23:04.597155  740558 cni.go:84] Creating CNI manager for ""
	I0920 19:23:04.597234  740558 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 19:23:04.597249  740558 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 19:23:04.597346  740558 start.go:340] cluster config:
	{Name:addons-388835 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-388835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:23:04.601255  740558 out.go:177] * Starting "addons-388835" primary control-plane node in "addons-388835" cluster
	I0920 19:23:04.603433  740558 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0920 19:23:04.605516  740558 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 19:23:04.607412  740558 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 19:23:04.607466  740558 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-734403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0920 19:23:04.607478  740558 cache.go:56] Caching tarball of preloaded images
	I0920 19:23:04.607483  740558 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 19:23:04.607561  740558 preload.go:172] Found /home/jenkins/minikube-integration/19678-734403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 19:23:04.607571  740558 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0920 19:23:04.607930  740558 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/config.json ...
	I0920 19:23:04.607957  740558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/config.json: {Name:mk462ac17cf9e2648c3ce9b6743729c7401edbca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:23:04.634642  740558 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 19:23:04.634754  740558 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 19:23:04.634773  740558 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 19:23:04.634778  740558 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 19:23:04.634785  740558 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 19:23:04.634791  740558 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 19:23:21.888814  740558 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 19:23:21.888856  740558 cache.go:194] Successfully downloaded all kic artifacts
	I0920 19:23:21.888900  740558 start.go:360] acquireMachinesLock for addons-388835: {Name:mk7beff3bd5d208d001e832367d92cb2d380a9e0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 19:23:21.889045  740558 start.go:364] duration metric: took 121.305µs to acquireMachinesLock for "addons-388835"
	I0920 19:23:21.889077  740558 start.go:93] Provisioning new machine with config: &{Name:addons-388835 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-388835 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0920 19:23:21.889158  740558 start.go:125] createHost starting for "" (driver="docker")
	I0920 19:23:21.891366  740558 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0920 19:23:21.891609  740558 start.go:159] libmachine.API.Create for "addons-388835" (driver="docker")
	I0920 19:23:21.891644  740558 client.go:168] LocalClient.Create starting
	I0920 19:23:21.891774  740558 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca.pem
	I0920 19:23:22.142260  740558 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/cert.pem
	I0920 19:23:22.618533  740558 cli_runner.go:164] Run: docker network inspect addons-388835 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 19:23:22.634221  740558 cli_runner.go:211] docker network inspect addons-388835 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 19:23:22.634336  740558 network_create.go:284] running [docker network inspect addons-388835] to gather additional debugging logs...
	I0920 19:23:22.634359  740558 cli_runner.go:164] Run: docker network inspect addons-388835
	W0920 19:23:22.649521  740558 cli_runner.go:211] docker network inspect addons-388835 returned with exit code 1
	I0920 19:23:22.649554  740558 network_create.go:287] error running [docker network inspect addons-388835]: docker network inspect addons-388835: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-388835 not found
	I0920 19:23:22.649569  740558 network_create.go:289] output of [docker network inspect addons-388835]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-388835 not found
	
	** /stderr **
	I0920 19:23:22.649664  740558 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 19:23:22.665713  740558 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c329c0}
	I0920 19:23:22.665759  740558 network_create.go:124] attempt to create docker network addons-388835 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0920 19:23:22.665821  740558 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-388835 addons-388835
	I0920 19:23:22.739010  740558 network_create.go:108] docker network addons-388835 192.168.49.0/24 created
	I0920 19:23:22.739045  740558 kic.go:121] calculated static IP "192.168.49.2" for the "addons-388835" container
	I0920 19:23:22.739123  740558 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 19:23:22.756920  740558 cli_runner.go:164] Run: docker volume create addons-388835 --label name.minikube.sigs.k8s.io=addons-388835 --label created_by.minikube.sigs.k8s.io=true
	I0920 19:23:22.775170  740558 oci.go:103] Successfully created a docker volume addons-388835
	I0920 19:23:22.775269  740558 cli_runner.go:164] Run: docker run --rm --name addons-388835-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-388835 --entrypoint /usr/bin/test -v addons-388835:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0920 19:23:24.823117  740558 cli_runner.go:217] Completed: docker run --rm --name addons-388835-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-388835 --entrypoint /usr/bin/test -v addons-388835:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (2.047798453s)
	I0920 19:23:24.823147  740558 oci.go:107] Successfully prepared a docker volume addons-388835
	I0920 19:23:24.823183  740558 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 19:23:24.823204  740558 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 19:23:24.823276  740558 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19678-734403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-388835:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 19:23:28.868256  740558 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19678-734403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-388835:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.044935438s)
	I0920 19:23:28.868290  740558 kic.go:203] duration metric: took 4.045082137s to extract preloaded images to volume ...
	W0920 19:23:28.868439  740558 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 19:23:28.868551  740558 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 19:23:28.913253  740558 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-388835 --name addons-388835 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-388835 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-388835 --network addons-388835 --ip 192.168.49.2 --volume addons-388835:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0920 19:23:29.245157  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Running}}
	I0920 19:23:29.270042  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:23:29.291926  740558 cli_runner.go:164] Run: docker exec addons-388835 stat /var/lib/dpkg/alternatives/iptables
	I0920 19:23:29.364286  740558 oci.go:144] the created container "addons-388835" has a running status.
	I0920 19:23:29.364315  740558 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa...
	I0920 19:23:30.212661  740558 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 19:23:30.239667  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:23:30.267682  740558 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 19:23:30.267710  740558 kic_runner.go:114] Args: [docker exec --privileged addons-388835 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0920 19:23:30.335537  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:23:30.354696  740558 machine.go:93] provisionDockerMachine start ...
	I0920 19:23:30.354789  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:23:30.373971  740558 main.go:141] libmachine: Using SSH client type: native
	I0920 19:23:30.374269  740558 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0920 19:23:30.374280  740558 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 19:23:30.521966  740558 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-388835
	
	I0920 19:23:30.522000  740558 ubuntu.go:169] provisioning hostname "addons-388835"
	I0920 19:23:30.522064  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:23:30.546108  740558 main.go:141] libmachine: Using SSH client type: native
	I0920 19:23:30.546403  740558 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0920 19:23:30.546420  740558 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-388835 && echo "addons-388835" | sudo tee /etc/hostname
	I0920 19:23:30.702538  740558 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-388835
	
	I0920 19:23:30.702667  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:23:30.719921  740558 main.go:141] libmachine: Using SSH client type: native
	I0920 19:23:30.720174  740558 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0920 19:23:30.720192  740558 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-388835' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-388835/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-388835' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 19:23:30.862353  740558 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 19:23:30.862380  740558 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19678-734403/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-734403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-734403/.minikube}
	I0920 19:23:30.862400  740558 ubuntu.go:177] setting up certificates
	I0920 19:23:30.862411  740558 provision.go:84] configureAuth start
	I0920 19:23:30.862486  740558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-388835
	I0920 19:23:30.879234  740558 provision.go:143] copyHostCerts
	I0920 19:23:30.879321  740558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-734403/.minikube/ca.pem (1078 bytes)
	I0920 19:23:30.879439  740558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-734403/.minikube/cert.pem (1123 bytes)
	I0920 19:23:30.879502  740558 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-734403/.minikube/key.pem (1679 bytes)
	I0920 19:23:30.879550  740558 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-734403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca-key.pem org=jenkins.addons-388835 san=[127.0.0.1 192.168.49.2 addons-388835 localhost minikube]
	I0920 19:23:31.388437  740558 provision.go:177] copyRemoteCerts
	I0920 19:23:31.388506  740558 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 19:23:31.388552  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:23:31.405620  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:23:31.507196  740558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 19:23:31.532693  740558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 19:23:31.556797  740558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0920 19:23:31.581070  740558 provision.go:87] duration metric: took 718.645379ms to configureAuth
	I0920 19:23:31.581097  740558 ubuntu.go:193] setting minikube options for container-runtime
	I0920 19:23:31.581284  740558 config.go:182] Loaded profile config "addons-388835": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 19:23:31.581293  740558 machine.go:96] duration metric: took 1.226578804s to provisionDockerMachine
	I0920 19:23:31.581300  740558 client.go:171] duration metric: took 9.689646398s to LocalClient.Create
	I0920 19:23:31.581314  740558 start.go:167] duration metric: took 9.68970678s to libmachine.API.Create "addons-388835"
	I0920 19:23:31.581321  740558 start.go:293] postStartSetup for "addons-388835" (driver="docker")
	I0920 19:23:31.581332  740558 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 19:23:31.581382  740558 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 19:23:31.581435  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:23:31.597789  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:23:31.699725  740558 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 19:23:31.702742  740558 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 19:23:31.702780  740558 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 19:23:31.702796  740558 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 19:23:31.702803  740558 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 19:23:31.702813  740558 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-734403/.minikube/addons for local assets ...
	I0920 19:23:31.702877  740558 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-734403/.minikube/files for local assets ...
	I0920 19:23:31.702905  740558 start.go:296] duration metric: took 121.577809ms for postStartSetup
	I0920 19:23:31.703203  740558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-388835
	I0920 19:23:31.719117  740558 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/config.json ...
	I0920 19:23:31.719406  740558 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:23:31.719465  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:23:31.735613  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:23:31.831157  740558 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 19:23:31.835447  740558 start.go:128] duration metric: took 9.946273286s to createHost
	I0920 19:23:31.835469  740558 start.go:83] releasing machines lock for "addons-388835", held for 9.946409145s
	I0920 19:23:31.835540  740558 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-388835
	I0920 19:23:31.851439  740558 ssh_runner.go:195] Run: cat /version.json
	I0920 19:23:31.851484  740558 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 19:23:31.851493  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:23:31.851558  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:23:31.872182  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:23:31.888858  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:23:32.097089  740558 ssh_runner.go:195] Run: systemctl --version
	I0920 19:23:32.101497  740558 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 19:23:32.105879  740558 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0920 19:23:32.131440  740558 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0920 19:23:32.131523  740558 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 19:23:32.162604  740558 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0920 19:23:32.162631  740558 start.go:495] detecting cgroup driver to use...
	I0920 19:23:32.162668  740558 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 19:23:32.162723  740558 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0920 19:23:32.176123  740558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 19:23:32.188301  740558 docker.go:217] disabling cri-docker service (if available) ...
	I0920 19:23:32.188379  740558 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 19:23:32.203140  740558 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 19:23:32.217892  740558 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 19:23:32.299833  740558 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 19:23:32.388089  740558 docker.go:233] disabling docker service ...
	I0920 19:23:32.388193  740558 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 19:23:32.413022  740558 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 19:23:32.425104  740558 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 19:23:32.519435  740558 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 19:23:32.607270  740558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 19:23:32.625415  740558 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 19:23:32.642481  740558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 19:23:32.653251  740558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 19:23:32.663393  740558 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 19:23:32.663481  740558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 19:23:32.673829  740558 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 19:23:32.684047  740558 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 19:23:32.694681  740558 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 19:23:32.704967  740558 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 19:23:32.714521  740558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 19:23:32.724752  740558 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 19:23:32.734731  740558 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 19:23:32.744636  740558 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 19:23:32.753445  740558 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 19:23:32.762141  740558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:23:32.852013  740558 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0920 19:23:32.987519  740558 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0920 19:23:32.987612  740558 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0920 19:23:32.991224  740558 start.go:563] Will wait 60s for crictl version
	I0920 19:23:32.991292  740558 ssh_runner.go:195] Run: which crictl
	I0920 19:23:32.995192  740558 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 19:23:33.038517  740558 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0920 19:23:33.038664  740558 ssh_runner.go:195] Run: containerd --version
	I0920 19:23:33.064991  740558 ssh_runner.go:195] Run: containerd --version
	I0920 19:23:33.092708  740558 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0920 19:23:33.094708  740558 cli_runner.go:164] Run: docker network inspect addons-388835 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 19:23:33.110504  740558 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0920 19:23:33.114263  740558 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:23:33.125886  740558 kubeadm.go:883] updating cluster {Name:addons-388835 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-388835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 19:23:33.126017  740558 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 19:23:33.126084  740558 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:23:33.166966  740558 containerd.go:627] all images are preloaded for containerd runtime.
	I0920 19:23:33.166993  740558 containerd.go:534] Images already preloaded, skipping extraction
	I0920 19:23:33.167058  740558 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 19:23:33.205866  740558 containerd.go:627] all images are preloaded for containerd runtime.
	I0920 19:23:33.205889  740558 cache_images.go:84] Images are preloaded, skipping loading
	I0920 19:23:33.205897  740558 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0920 19:23:33.205997  740558 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-388835 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-388835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 19:23:33.206077  740558 ssh_runner.go:195] Run: sudo crictl info
	I0920 19:23:33.245586  740558 cni.go:84] Creating CNI manager for ""
	I0920 19:23:33.245614  740558 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 19:23:33.245624  740558 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 19:23:33.245647  740558 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-388835 NodeName:addons-388835 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 19:23:33.245789  740558 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-388835"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 19:23:33.245866  740558 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 19:23:33.254764  740558 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 19:23:33.254843  740558 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 19:23:33.263733  740558 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0920 19:23:33.282266  740558 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 19:23:33.300583  740558 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0920 19:23:33.319023  740558 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0920 19:23:33.322468  740558 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 19:23:33.333186  740558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:23:33.418887  740558 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:23:33.434726  740558 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835 for IP: 192.168.49.2
	I0920 19:23:33.434790  740558 certs.go:194] generating shared ca certs ...
	I0920 19:23:33.434822  740558 certs.go:226] acquiring lock for ca certs: {Name:mk05671cd2fa7cea0f374261a29f5dc2649893f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:23:33.434981  740558 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-734403/.minikube/ca.key
	I0920 19:23:33.897785  740558 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-734403/.minikube/ca.crt ...
	I0920 19:23:33.897822  740558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/.minikube/ca.crt: {Name:mk3f0b0561ef9f81a8ad86fee28427842f7aa49e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:23:33.898051  740558 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-734403/.minikube/ca.key ...
	I0920 19:23:33.898069  740558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/.minikube/ca.key: {Name:mk90fbf5730357da558f41bafc65be68811c044b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:23:33.898165  740558 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-734403/.minikube/proxy-client-ca.key
	I0920 19:23:34.830889  740558 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-734403/.minikube/proxy-client-ca.crt ...
	I0920 19:23:34.830921  740558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/.minikube/proxy-client-ca.crt: {Name:mk576efe4a422f2d9f2bfd827139083dc97c7f1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:23:34.831111  740558 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-734403/.minikube/proxy-client-ca.key ...
	I0920 19:23:34.831127  740558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/.minikube/proxy-client-ca.key: {Name:mk75a72e50a8a1d7122c957673106d9796941347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:23:34.831204  740558 certs.go:256] generating profile certs ...
	I0920 19:23:34.831266  740558 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.key
	I0920 19:23:34.831295  740558 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt with IP's: []
	I0920 19:23:35.071396  740558 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt ...
	I0920 19:23:35.071437  740558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: {Name:mk48628d3e84cf6f1b62f77b06141d58313d83d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:23:35.071645  740558 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.key ...
	I0920 19:23:35.071662  740558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.key: {Name:mk1703d2a9f0fd893397b2abe561c1e0c0a2c907 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:23:35.072450  740558 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/apiserver.key.04aba3af
	I0920 19:23:35.072485  740558 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/apiserver.crt.04aba3af with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0920 19:23:35.565916  740558 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/apiserver.crt.04aba3af ...
	I0920 19:23:35.565951  740558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/apiserver.crt.04aba3af: {Name:mkf63ea4c08d857ad01bcf19d81711811e42293b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:23:35.566137  740558 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/apiserver.key.04aba3af ...
	I0920 19:23:35.566150  740558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/apiserver.key.04aba3af: {Name:mk38bfcc14cf42241effbe561ab13769ac8bd207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:23:35.566846  740558 certs.go:381] copying /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/apiserver.crt.04aba3af -> /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/apiserver.crt
	I0920 19:23:35.566936  740558 certs.go:385] copying /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/apiserver.key.04aba3af -> /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/apiserver.key
	I0920 19:23:35.566989  740558 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/proxy-client.key
	I0920 19:23:35.567011  740558 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/proxy-client.crt with IP's: []
	I0920 19:23:35.846819  740558 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/proxy-client.crt ...
	I0920 19:23:35.846854  740558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/proxy-client.crt: {Name:mk340b253504dbcfa902063cb71f4387a6e8ad46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:23:35.847599  740558 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/proxy-client.key ...
	I0920 19:23:35.847621  740558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/proxy-client.key: {Name:mkc670dbee9c05798da1efc86a90a2f5c13e2732 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:23:35.847831  740558 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 19:23:35.847871  740558 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca.pem (1078 bytes)
	I0920 19:23:35.847900  740558 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/cert.pem (1123 bytes)
	I0920 19:23:35.847928  740558 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/key.pem (1679 bytes)
	I0920 19:23:35.848637  740558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 19:23:35.877172  740558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 19:23:35.902920  740558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 19:23:35.928513  740558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 19:23:35.952479  740558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0920 19:23:35.977153  740558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0920 19:23:36.010525  740558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 19:23:36.038051  740558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 19:23:36.064210  740558 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 19:23:36.091511  740558 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 19:23:36.111682  740558 ssh_runner.go:195] Run: openssl version
	I0920 19:23:36.117460  740558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 19:23:36.127444  740558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:23:36.131835  740558 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 19:23 /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:23:36.131907  740558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 19:23:36.138955  740558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 19:23:36.148590  740558 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 19:23:36.152166  740558 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 19:23:36.152217  740558 kubeadm.go:392] StartCluster: {Name:addons-388835 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-388835 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:23:36.152296  740558 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0920 19:23:36.152373  740558 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 19:23:36.193594  740558 cri.go:89] found id: ""
	I0920 19:23:36.193667  740558 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 19:23:36.202390  740558 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 19:23:36.211368  740558 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 19:23:36.211433  740558 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 19:23:36.220280  740558 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 19:23:36.220343  740558 kubeadm.go:157] found existing configuration files:
	
	I0920 19:23:36.220570  740558 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 19:23:36.232939  740558 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 19:23:36.233031  740558 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 19:23:36.241496  740558 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 19:23:36.250612  740558 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 19:23:36.250684  740558 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 19:23:36.259403  740558 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 19:23:36.269064  740558 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 19:23:36.269146  740558 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 19:23:36.277881  740558 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 19:23:36.286558  740558 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 19:23:36.286628  740558 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 19:23:36.295168  740558 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 19:23:36.336063  740558 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 19:23:36.336132  740558 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 19:23:36.367383  740558 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 19:23:36.367462  740558 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0920 19:23:36.367501  740558 kubeadm.go:310] OS: Linux
	I0920 19:23:36.367552  740558 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 19:23:36.367603  740558 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 19:23:36.367654  740558 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 19:23:36.367705  740558 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 19:23:36.367755  740558 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 19:23:36.367807  740558 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 19:23:36.367864  740558 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 19:23:36.367917  740558 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 19:23:36.367967  740558 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 19:23:36.449009  740558 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 19:23:36.449123  740558 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 19:23:36.449219  740558 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 19:23:36.454467  740558 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 19:23:36.457420  740558 out.go:235]   - Generating certificates and keys ...
	I0920 19:23:36.457529  740558 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0920 19:23:36.457608  740558 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0920 19:23:37.774921  740558 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0920 19:23:38.572752  740558 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0920 19:23:38.891226  740558 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0920 19:23:39.420348  740558 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0920 19:23:39.771149  740558 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0920 19:23:39.771308  740558 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-388835 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 19:23:40.134247  740558 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0920 19:23:40.134833  740558 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-388835 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0920 19:23:40.374021  740558 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0920 19:23:40.624673  740558 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0920 19:23:41.673065  740558 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0920 19:23:41.673352  740558 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0920 19:23:42.595168  740558 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0920 19:23:42.814271  740558 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0920 19:23:43.382426  740558 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0920 19:23:43.671488  740558 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0920 19:23:44.943545  740558 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0920 19:23:44.944242  740558 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0920 19:23:44.947203  740558 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0920 19:23:44.950010  740558 out.go:235]   - Booting up control plane ...
	I0920 19:23:44.950125  740558 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0920 19:23:44.950205  740558 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0920 19:23:44.950276  740558 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0920 19:23:44.961147  740558 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0920 19:23:44.968001  740558 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0920 19:23:44.968216  740558 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0920 19:23:45.151602  740558 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0920 19:23:45.152581  740558 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0920 19:23:47.664194  740558 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.511655441s
	I0920 19:23:47.664285  740558 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0920 19:23:54.666445  740558 kubeadm.go:310] [api-check] The API server is healthy after 7.002246154s
	I0920 19:23:54.690946  740558 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0920 19:23:54.703964  740558 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0920 19:23:54.731491  740558 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0920 19:23:54.731706  740558 kubeadm.go:310] [mark-control-plane] Marking the node addons-388835 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0920 19:23:54.744289  740558 kubeadm.go:310] [bootstrap-token] Using token: 52wqa9.z738ke7mmaxfj21y
	I0920 19:23:54.746491  740558 out.go:235]   - Configuring RBAC rules ...
	I0920 19:23:54.746651  740558 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0920 19:23:54.751867  740558 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0920 19:23:54.763107  740558 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0920 19:23:54.769056  740558 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0920 19:23:54.773143  740558 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0920 19:23:54.777419  740558 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0920 19:23:55.087373  740558 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0920 19:23:55.502419  740558 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0920 19:23:56.075109  740558 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0920 19:23:56.076617  740558 kubeadm.go:310] 
	I0920 19:23:56.076694  740558 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0920 19:23:56.076700  740558 kubeadm.go:310] 
	I0920 19:23:56.076777  740558 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0920 19:23:56.076782  740558 kubeadm.go:310] 
	I0920 19:23:56.076807  740558 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0920 19:23:56.077277  740558 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0920 19:23:56.077344  740558 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0920 19:23:56.077350  740558 kubeadm.go:310] 
	I0920 19:23:56.077403  740558 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0920 19:23:56.077407  740558 kubeadm.go:310] 
	I0920 19:23:56.077454  740558 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0920 19:23:56.077459  740558 kubeadm.go:310] 
	I0920 19:23:56.077510  740558 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0920 19:23:56.077584  740558 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0920 19:23:56.077651  740558 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0920 19:23:56.077655  740558 kubeadm.go:310] 
	I0920 19:23:56.077960  740558 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0920 19:23:56.078053  740558 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0920 19:23:56.078059  740558 kubeadm.go:310] 
	I0920 19:23:56.078361  740558 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 52wqa9.z738ke7mmaxfj21y \
	I0920 19:23:56.078470  740558 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8b304dfc288e6e4b82506869632c46f85018683fd47b95b6c7b299c4720c7503 \
	I0920 19:23:56.078680  740558 kubeadm.go:310] 	--control-plane 
	I0920 19:23:56.078691  740558 kubeadm.go:310] 
	I0920 19:23:56.078956  740558 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0920 19:23:56.078985  740558 kubeadm.go:310] 
	I0920 19:23:56.079512  740558 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 52wqa9.z738ke7mmaxfj21y \
	I0920 19:23:56.079838  740558 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:8b304dfc288e6e4b82506869632c46f85018683fd47b95b6c7b299c4720c7503 
	I0920 19:23:56.084749  740558 kubeadm.go:310] W0920 19:23:36.332572    1020 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:23:56.085050  740558 kubeadm.go:310] W0920 19:23:36.333595    1020 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0920 19:23:56.085269  740558 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0920 19:23:56.085379  740558 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0920 19:23:56.085403  740558 cni.go:84] Creating CNI manager for ""
	I0920 19:23:56.085416  740558 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 19:23:56.087710  740558 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0920 19:23:56.089718  740558 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0920 19:23:56.093729  740558 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0920 19:23:56.093753  740558 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0920 19:23:56.113088  740558 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0920 19:23:56.405177  740558 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0920 19:23:56.405304  740558 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:23:56.405302  740558 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-388835 minikube.k8s.io/updated_at=2024_09_20T19_23_56_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a minikube.k8s.io/name=addons-388835 minikube.k8s.io/primary=true
	I0920 19:23:56.418910  740558 ops.go:34] apiserver oom_adj: -16
	I0920 19:23:56.550939  740558 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:23:57.051584  740558 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:23:57.552067  740558 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:23:58.051654  740558 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:23:58.551540  740558 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:23:59.051747  740558 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:23:59.551299  740558 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:24:00.064305  740558 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0920 19:24:00.336167  740558 kubeadm.go:1113] duration metric: took 3.93088752s to wait for elevateKubeSystemPrivileges
	I0920 19:24:00.336199  740558 kubeadm.go:394] duration metric: took 24.183988016s to StartCluster
	I0920 19:24:00.336221  740558 settings.go:142] acquiring lock: {Name:mk0c46dfbbc36539bac54a4b44b23e5293c710e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:24:00.336358  740558 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-734403/kubeconfig
	I0920 19:24:00.336795  740558 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/kubeconfig: {Name:mk2c4e41774b0706b15fe3f774308577d8981408 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:24:00.344635  740558 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0920 19:24:00.345035  740558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0920 19:24:00.345378  740558 config.go:182] Loaded profile config "addons-388835": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 19:24:00.345421  740558 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0920 19:24:00.345513  740558 addons.go:69] Setting yakd=true in profile "addons-388835"
	I0920 19:24:00.345529  740558 addons.go:234] Setting addon yakd=true in "addons-388835"
	I0920 19:24:00.345560  740558 host.go:66] Checking if "addons-388835" exists ...
	I0920 19:24:00.346067  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:24:00.347838  740558 addons.go:69] Setting cloud-spanner=true in profile "addons-388835"
	I0920 19:24:00.347866  740558 addons.go:234] Setting addon cloud-spanner=true in "addons-388835"
	I0920 19:24:00.347910  740558 host.go:66] Checking if "addons-388835" exists ...
	I0920 19:24:00.348427  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:24:00.348799  740558 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-388835"
	I0920 19:24:00.348820  740558 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-388835"
	I0920 19:24:00.348851  740558 host.go:66] Checking if "addons-388835" exists ...
	I0920 19:24:00.350045  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:24:00.355523  740558 addons.go:69] Setting registry=true in profile "addons-388835"
	I0920 19:24:00.355708  740558 addons.go:234] Setting addon registry=true in "addons-388835"
	I0920 19:24:00.355789  740558 host.go:66] Checking if "addons-388835" exists ...
	I0920 19:24:00.357190  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:24:00.367994  740558 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-388835"
	I0920 19:24:00.368087  740558 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-388835"
	I0920 19:24:00.368125  740558 host.go:66] Checking if "addons-388835" exists ...
	I0920 19:24:00.368703  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:24:00.369498  740558 addons.go:69] Setting storage-provisioner=true in profile "addons-388835"
	I0920 19:24:00.369576  740558 addons.go:234] Setting addon storage-provisioner=true in "addons-388835"
	I0920 19:24:00.369650  740558 host.go:66] Checking if "addons-388835" exists ...
	I0920 19:24:00.370387  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:24:00.414551  740558 addons.go:69] Setting default-storageclass=true in profile "addons-388835"
	I0920 19:24:00.414591  740558 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-388835"
	I0920 19:24:00.415015  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:24:00.416565  740558 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-388835"
	I0920 19:24:00.416603  740558 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-388835"
	I0920 19:24:00.416970  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:24:00.434086  740558 addons.go:69] Setting gcp-auth=true in profile "addons-388835"
	I0920 19:24:00.434136  740558 mustload.go:65] Loading cluster: addons-388835
	I0920 19:24:00.434401  740558 config.go:182] Loaded profile config "addons-388835": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 19:24:00.434695  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:24:00.519299  740558 addons.go:69] Setting volcano=true in profile "addons-388835"
	I0920 19:24:00.519339  740558 addons.go:234] Setting addon volcano=true in "addons-388835"
	I0920 19:24:00.519382  740558 host.go:66] Checking if "addons-388835" exists ...
	I0920 19:24:00.519929  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:24:00.536380  740558 addons.go:69] Setting ingress=true in profile "addons-388835"
	I0920 19:24:00.536421  740558 addons.go:234] Setting addon ingress=true in "addons-388835"
	I0920 19:24:00.536478  740558 host.go:66] Checking if "addons-388835" exists ...
	I0920 19:24:00.537059  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:24:00.545198  740558 addons.go:69] Setting volumesnapshots=true in profile "addons-388835"
	I0920 19:24:00.545265  740558 addons.go:234] Setting addon volumesnapshots=true in "addons-388835"
	I0920 19:24:00.545323  740558 host.go:66] Checking if "addons-388835" exists ...
	I0920 19:24:00.545891  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:24:00.582835  740558 addons.go:69] Setting ingress-dns=true in profile "addons-388835"
	I0920 19:24:00.582870  740558 addons.go:234] Setting addon ingress-dns=true in "addons-388835"
	I0920 19:24:00.582921  740558 host.go:66] Checking if "addons-388835" exists ...
	I0920 19:24:00.583552  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:24:00.584898  740558 out.go:177] * Verifying Kubernetes components...
	I0920 19:24:00.608129  740558 addons.go:69] Setting inspektor-gadget=true in profile "addons-388835"
	I0920 19:24:00.608170  740558 addons.go:234] Setting addon inspektor-gadget=true in "addons-388835"
	I0920 19:24:00.608213  740558 host.go:66] Checking if "addons-388835" exists ...
	I0920 19:24:00.608747  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:24:00.624106  740558 addons.go:69] Setting metrics-server=true in profile "addons-388835"
	I0920 19:24:00.624142  740558 addons.go:234] Setting addon metrics-server=true in "addons-388835"
	I0920 19:24:00.624182  740558 host.go:66] Checking if "addons-388835" exists ...
	I0920 19:24:00.624942  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:24:00.629690  740558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 19:24:00.660609  740558 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0920 19:24:00.662917  740558 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0920 19:24:00.663004  740558 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0920 19:24:00.663141  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:24:00.685361  740558 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0920 19:24:00.687920  740558 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0920 19:24:00.687990  740558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0920 19:24:00.688111  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:24:00.708415  740558 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0920 19:24:00.710842  740558 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 19:24:00.710876  740558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0920 19:24:00.710961  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:24:00.758840  740558 addons.go:234] Setting addon default-storageclass=true in "addons-388835"
	I0920 19:24:00.758885  740558 host.go:66] Checking if "addons-388835" exists ...
	I0920 19:24:00.759321  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:24:00.791227  740558 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 19:24:00.793406  740558 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:24:00.793491  740558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 19:24:00.793593  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:24:00.805076  740558 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0920 19:24:00.810959  740558 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0920 19:24:00.816902  740558 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0920 19:24:00.818468  740558 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0920 19:24:00.818722  740558 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0920 19:24:00.830004  740558 out.go:177]   - Using image docker.io/registry:2.8.3
	I0920 19:24:00.833852  740558 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0920 19:24:00.835559  740558 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0920 19:24:00.835778  740558 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0920 19:24:00.835810  740558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0920 19:24:00.835910  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:24:00.836199  740558 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0920 19:24:00.841564  740558 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0920 19:24:00.845989  740558 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0920 19:24:00.851848  740558 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0920 19:24:00.854579  740558 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0920 19:24:00.856057  740558 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0920 19:24:00.856074  740558 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0920 19:24:00.856143  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:24:00.857044  740558 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 19:24:00.857104  740558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0920 19:24:00.857192  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:24:00.874865  740558 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0920 19:24:00.877400  740558 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0920 19:24:00.879533  740558 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 19:24:00.879605  740558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0920 19:24:00.879706  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:24:00.880256  740558 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0920 19:24:00.880273  740558 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0920 19:24:00.880322  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:24:00.904090  740558 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0920 19:24:00.904366  740558 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 19:24:00.906891  740558 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0920 19:24:00.909155  740558 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 19:24:00.910761  740558 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0920 19:24:00.911721  740558 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 19:24:00.911774  740558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0920 19:24:00.911881  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:24:00.927483  740558 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-388835"
	I0920 19:24:00.927526  740558 host.go:66] Checking if "addons-388835" exists ...
	I0920 19:24:00.927975  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:24:00.928210  740558 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 19:24:00.928225  740558 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 19:24:00.928268  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:24:00.945370  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:24:00.946184  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:24:00.946407  740558 host.go:66] Checking if "addons-388835" exists ...
	I0920 19:24:00.947951  740558 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0920 19:24:00.947967  740558 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0920 19:24:00.948020  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:24:00.957821  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:24:00.998708  740558 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 19:24:00.998731  740558 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 19:24:00.998814  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:24:01.010952  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:24:01.094453  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:24:01.098545  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:24:01.120848  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:24:01.121533  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:24:01.124080  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:24:01.134407  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:24:01.142094  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:24:01.165083  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:24:01.175778  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:24:01.179765  740558 out.go:177]   - Using image docker.io/busybox:stable
	I0920 19:24:01.181825  740558 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0920 19:24:01.183672  740558 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 19:24:01.183699  740558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0920 19:24:01.183765  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:24:01.211951  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:24:01.256009  740558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0920 19:24:01.256147  740558 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 19:24:01.880479  740558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0920 19:24:01.955458  740558 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0920 19:24:01.955541  740558 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0920 19:24:01.995952  740558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0920 19:24:02.026004  740558 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0920 19:24:02.026080  740558 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0920 19:24:02.029551  740558 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 19:24:02.029627  740558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0920 19:24:02.043563  740558 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0920 19:24:02.043651  740558 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0920 19:24:02.057805  740558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0920 19:24:02.073421  740558 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0920 19:24:02.073498  740558 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0920 19:24:02.106797  740558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 19:24:02.111087  740558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0920 19:24:02.128542  740558 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0920 19:24:02.128626  740558 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0920 19:24:02.147536  740558 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0920 19:24:02.147618  740558 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0920 19:24:02.153236  740558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0920 19:24:02.195515  740558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0920 19:24:02.283164  740558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 19:24:02.311919  740558 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0920 19:24:02.311996  740558 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0920 19:24:02.361100  740558 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0920 19:24:02.361177  740558 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0920 19:24:02.404528  740558 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0920 19:24:02.404608  740558 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0920 19:24:02.405961  740558 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 19:24:02.406029  740558 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 19:24:02.435896  740558 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0920 19:24:02.435971  740558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0920 19:24:02.478633  740558 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0920 19:24:02.478709  740558 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0920 19:24:02.510982  740558 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0920 19:24:02.511060  740558 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0920 19:24:02.516065  740558 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0920 19:24:02.516144  740558 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0920 19:24:02.709101  740558 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0920 19:24:02.709182  740558 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0920 19:24:02.755247  740558 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0920 19:24:02.755328  740558 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0920 19:24:02.780271  740558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0920 19:24:02.795123  740558 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:24:02.795201  740558 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 19:24:02.797003  740558 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0920 19:24:02.797078  740558 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0920 19:24:02.801430  740558 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0920 19:24:02.801503  740558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0920 19:24:02.927100  740558 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0920 19:24:02.927181  740558 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0920 19:24:02.975420  740558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0920 19:24:03.026249  740558 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0920 19:24:03.026338  740558 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0920 19:24:03.072335  740558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 19:24:03.120807  740558 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 19:24:03.120883  740558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0920 19:24:03.148461  740558 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.892281123s)
	I0920 19:24:03.149472  740558 node_ready.go:35] waiting up to 6m0s for node "addons-388835" to be "Ready" ...
	I0920 19:24:03.149759  740558 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.893714792s)
	I0920 19:24:03.149827  740558 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0920 19:24:03.157153  740558 node_ready.go:49] node "addons-388835" has status "Ready":"True"
	I0920 19:24:03.157226  740558 node_ready.go:38] duration metric: took 7.683828ms for node "addons-388835" to be "Ready" ...
	I0920 19:24:03.157251  740558 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:24:03.184424  740558 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gc4xn" in "kube-system" namespace to be "Ready" ...
	I0920 19:24:03.320996  740558 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0920 19:24:03.321078  740558 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0920 19:24:03.436491  740558 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0920 19:24:03.436520  740558 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0920 19:24:03.486407  740558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 19:24:03.619859  740558 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 19:24:03.619886  740558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0920 19:24:03.653965  740558 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-388835" context rescaled to 1 replicas
	I0920 19:24:03.744031  740558 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0920 19:24:03.744111  740558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0920 19:24:03.903942  740558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0920 19:24:04.030910  740558 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0920 19:24:04.030993  740558 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0920 19:24:04.269648  740558 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.389127985s)
	I0920 19:24:04.344994  740558 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.348936108s)
	I0920 19:24:04.688119  740558 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-gc4xn" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-gc4xn" not found
	I0920 19:24:04.688194  740558 pod_ready.go:82] duration metric: took 1.503684567s for pod "coredns-7c65d6cfc9-gc4xn" in "kube-system" namespace to be "Ready" ...
	E0920 19:24:04.688221  740558 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-gc4xn" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-gc4xn" not found
	I0920 19:24:04.688262  740558 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-jvb97" in "kube-system" namespace to be "Ready" ...
	I0920 19:24:04.708088  740558 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0920 19:24:04.708386  740558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0920 19:24:05.255398  740558 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0920 19:24:05.255473  740558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0920 19:24:05.735224  740558 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 19:24:05.735305  740558 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0920 19:24:05.901293  740558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0920 19:24:06.698761  740558 pod_ready.go:103] pod "coredns-7c65d6cfc9-jvb97" in "kube-system" namespace has status "Ready":"False"
	I0920 19:24:08.177793  740558 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0920 19:24:08.177944  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:24:08.213934  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:24:08.516882  740558 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0920 19:24:08.569723  740558 addons.go:234] Setting addon gcp-auth=true in "addons-388835"
	I0920 19:24:08.569775  740558 host.go:66] Checking if "addons-388835" exists ...
	I0920 19:24:08.570252  740558 cli_runner.go:164] Run: docker container inspect addons-388835 --format={{.State.Status}}
	I0920 19:24:08.597148  740558 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0920 19:24:08.597199  740558 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-388835
	I0920 19:24:08.622800  740558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/addons-388835/id_rsa Username:docker}
	I0920 19:24:09.241763  740558 pod_ready.go:103] pod "coredns-7c65d6cfc9-jvb97" in "kube-system" namespace has status "Ready":"False"
	I0920 19:24:11.007844  740558 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.949941509s)
	I0920 19:24:11.007960  740558 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.901090562s)
	I0920 19:24:11.008026  740558 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.896858692s)
	I0920 19:24:11.008189  740558 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.854876126s)
	I0920 19:24:11.008267  740558 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.812674484s)
	I0920 19:24:11.008288  740558 addons.go:475] Verifying addon ingress=true in "addons-388835"
	I0920 19:24:11.008495  740558 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.725264696s)
	I0920 19:24:11.008902  740558 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.228555073s)
	I0920 19:24:11.008932  740558 addons.go:475] Verifying addon registry=true in "addons-388835"
	I0920 19:24:11.009052  740558 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.033535636s)
	I0920 19:24:11.009280  740558 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.936849106s)
	I0920 19:24:11.009645  740558 addons.go:475] Verifying addon metrics-server=true in "addons-388835"
	I0920 19:24:11.009372  740558 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.522932305s)
	W0920 19:24:11.009677  740558 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 19:24:11.009712  740558 retry.go:31] will retry after 282.775658ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0920 19:24:11.009437  740558 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.10541733s)
	I0920 19:24:11.011716  740558 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-388835 service yakd-dashboard -n yakd-dashboard
	
	I0920 19:24:11.011745  740558 out.go:177] * Verifying registry addon...
	I0920 19:24:11.011728  740558 out.go:177] * Verifying ingress addon...
	I0920 19:24:11.015944  740558 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0920 19:24:11.016041  740558 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0920 19:24:11.037384  740558 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0920 19:24:11.037406  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:11.038583  740558 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0920 19:24:11.038619  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0920 19:24:11.076603  740558 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0920 19:24:11.293395  740558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0920 19:24:11.524340  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:11.528137  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:11.708642  740558 pod_ready.go:103] pod "coredns-7c65d6cfc9-jvb97" in "kube-system" namespace has status "Ready":"False"
	I0920 19:24:11.957492  740558 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.056103089s)
	I0920 19:24:11.957529  740558 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-388835"
	I0920 19:24:11.957612  740558 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.360445472s)
	I0920 19:24:11.959593  740558 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0920 19:24:11.959703  740558 out.go:177] * Verifying csi-hostpath-driver addon...
	I0920 19:24:11.961854  740558 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0920 19:24:11.962862  740558 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0920 19:24:11.964304  740558 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0920 19:24:11.964363  740558 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0920 19:24:11.988134  740558 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0920 19:24:11.988175  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:12.037237  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:12.087666  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:12.130003  740558 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0920 19:24:12.130045  740558 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0920 19:24:12.235329  740558 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 19:24:12.235407  740558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0920 19:24:12.300976  740558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0920 19:24:12.470620  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:12.522086  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:12.523212  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:12.968614  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:13.023216  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:13.023869  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:13.228803  740558 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.935346838s)
	I0920 19:24:13.231708  740558 addons.go:475] Verifying addon gcp-auth=true in "addons-388835"
	I0920 19:24:13.236603  740558 out.go:177] * Verifying gcp-auth addon...
	I0920 19:24:13.240000  740558 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0920 19:24:13.242994  740558 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 19:24:13.467864  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:13.523424  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:13.523597  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:13.968908  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:14.022817  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:14.024585  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:14.195210  740558 pod_ready.go:103] pod "coredns-7c65d6cfc9-jvb97" in "kube-system" namespace has status "Ready":"False"
	I0920 19:24:14.469418  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:14.522178  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:14.523787  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:14.968824  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:15.023670  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:15.024162  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:15.468465  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:15.520797  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:15.523082  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:15.969247  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:16.025196  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:16.025470  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:16.201974  740558 pod_ready.go:103] pod "coredns-7c65d6cfc9-jvb97" in "kube-system" namespace has status "Ready":"False"
	I0920 19:24:16.467626  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:16.522116  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:16.524203  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:16.968817  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:17.069550  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:17.070071  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:17.194976  740558 pod_ready.go:93] pod "coredns-7c65d6cfc9-jvb97" in "kube-system" namespace has status "Ready":"True"
	I0920 19:24:17.195049  740558 pod_ready.go:82] duration metric: took 12.50676504s for pod "coredns-7c65d6cfc9-jvb97" in "kube-system" namespace to be "Ready" ...
	I0920 19:24:17.195077  740558 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-388835" in "kube-system" namespace to be "Ready" ...
	I0920 19:24:17.200873  740558 pod_ready.go:93] pod "etcd-addons-388835" in "kube-system" namespace has status "Ready":"True"
	I0920 19:24:17.200954  740558 pod_ready.go:82] duration metric: took 5.843253ms for pod "etcd-addons-388835" in "kube-system" namespace to be "Ready" ...
	I0920 19:24:17.200993  740558 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-388835" in "kube-system" namespace to be "Ready" ...
	I0920 19:24:17.206593  740558 pod_ready.go:93] pod "kube-apiserver-addons-388835" in "kube-system" namespace has status "Ready":"True"
	I0920 19:24:17.206669  740558 pod_ready.go:82] duration metric: took 5.631463ms for pod "kube-apiserver-addons-388835" in "kube-system" namespace to be "Ready" ...
	I0920 19:24:17.206695  740558 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-388835" in "kube-system" namespace to be "Ready" ...
	I0920 19:24:17.213582  740558 pod_ready.go:93] pod "kube-controller-manager-addons-388835" in "kube-system" namespace has status "Ready":"True"
	I0920 19:24:17.213660  740558 pod_ready.go:82] duration metric: took 6.943253ms for pod "kube-controller-manager-addons-388835" in "kube-system" namespace to be "Ready" ...
	I0920 19:24:17.213695  740558 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9r82v" in "kube-system" namespace to be "Ready" ...
	I0920 19:24:17.219427  740558 pod_ready.go:93] pod "kube-proxy-9r82v" in "kube-system" namespace has status "Ready":"True"
	I0920 19:24:17.219506  740558 pod_ready.go:82] duration metric: took 5.788122ms for pod "kube-proxy-9r82v" in "kube-system" namespace to be "Ready" ...
	I0920 19:24:17.219543  740558 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-388835" in "kube-system" namespace to be "Ready" ...
	I0920 19:24:17.468900  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:17.523801  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:17.524937  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:17.594346  740558 pod_ready.go:93] pod "kube-scheduler-addons-388835" in "kube-system" namespace has status "Ready":"True"
	I0920 19:24:17.594421  740558 pod_ready.go:82] duration metric: took 374.842273ms for pod "kube-scheduler-addons-388835" in "kube-system" namespace to be "Ready" ...
	I0920 19:24:17.594446  740558 pod_ready.go:39] duration metric: took 14.43716744s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 19:24:17.594476  740558 api_server.go:52] waiting for apiserver process to appear ...
	I0920 19:24:17.594582  740558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:24:17.611448  740558 api_server.go:72] duration metric: took 17.266692439s to wait for apiserver process to appear ...
	I0920 19:24:17.611525  740558 api_server.go:88] waiting for apiserver healthz status ...
	I0920 19:24:17.611565  740558 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0920 19:24:17.619978  740558 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0920 19:24:17.621258  740558 api_server.go:141] control plane version: v1.31.1
	I0920 19:24:17.621331  740558 api_server.go:131] duration metric: took 9.78252ms to wait for apiserver health ...
	I0920 19:24:17.621356  740558 system_pods.go:43] waiting for kube-system pods to appear ...
	I0920 19:24:17.801558  740558 system_pods.go:59] 18 kube-system pods found
	I0920 19:24:17.801638  740558 system_pods.go:61] "coredns-7c65d6cfc9-jvb97" [1aed1973-f82b-40c8-9a52-ffd69196ddb3] Running
	I0920 19:24:17.801671  740558 system_pods.go:61] "csi-hostpath-attacher-0" [3b5a69b0-ef2a-4e59-ae0f-df75c983227f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 19:24:17.801695  740558 system_pods.go:61] "csi-hostpath-resizer-0" [cf42fb9d-12c5-4500-8d8c-447f9f192175] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 19:24:17.801728  740558 system_pods.go:61] "csi-hostpathplugin-x78bc" [66f7a68e-78a5-456c-9958-bda260389f90] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 19:24:17.801752  740558 system_pods.go:61] "etcd-addons-388835" [21cd7c8d-6a8f-4f1f-aa01-45c1b3d2d04d] Running
	I0920 19:24:17.801777  740558 system_pods.go:61] "kindnet-gffmg" [01a1548e-ada9-49d4-aead-0fb27af5b884] Running
	I0920 19:24:17.801797  740558 system_pods.go:61] "kube-apiserver-addons-388835" [8a501fc6-4064-4273-b7c3-ab4aacca5161] Running
	I0920 19:24:17.801828  740558 system_pods.go:61] "kube-controller-manager-addons-388835" [0c5c526c-a15a-4deb-a765-016282b49d92] Running
	I0920 19:24:17.801851  740558 system_pods.go:61] "kube-ingress-dns-minikube" [7dbf4aec-be85-43e5-813d-75b1b3660834] Running
	I0920 19:24:17.801874  740558 system_pods.go:61] "kube-proxy-9r82v" [1e88a6c5-e432-44c8-b40b-58b0d863e34f] Running
	I0920 19:24:17.801896  740558 system_pods.go:61] "kube-scheduler-addons-388835" [34e5fda7-13d9-4699-b6fc-c9dbbd077725] Running
	I0920 19:24:17.801930  740558 system_pods.go:61] "metrics-server-84c5f94fbc-qpwkg" [6d2ee6cf-4892-473c-a648-0405b31eddf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:24:17.801955  740558 system_pods.go:61] "nvidia-device-plugin-daemonset-pst9m" [a63a2067-c0d8-4755-bc26-33ef9f8e8c7d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0920 19:24:17.801982  740558 system_pods.go:61] "registry-66c9cd494c-mr26g" [b31af5af-2dd0-483b-bb89-7be808c67c81] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 19:24:17.802012  740558 system_pods.go:61] "registry-proxy-vt26f" [c4bffcb8-e224-4e7d-9149-e0a9c22d46f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 19:24:17.802044  740558 system_pods.go:61] "snapshot-controller-56fcc65765-r6bk5" [faf94dc5-50bc-4ebc-b4de-352fca1feb31] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 19:24:17.802069  740558 system_pods.go:61] "snapshot-controller-56fcc65765-rtlld" [11061a27-925b-454c-9303-c1bcf4149d7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 19:24:17.802094  740558 system_pods.go:61] "storage-provisioner" [3e689755-1d20-4585-84bd-8beb171718d3] Running
	I0920 19:24:17.802122  740558 system_pods.go:74] duration metric: took 180.742469ms to wait for pod list to return data ...
	I0920 19:24:17.802152  740558 default_sa.go:34] waiting for default service account to be created ...
	I0920 19:24:17.969170  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:17.992491  740558 default_sa.go:45] found service account: "default"
	I0920 19:24:17.992570  740558 default_sa.go:55] duration metric: took 190.395489ms for default service account to be created ...
	I0920 19:24:17.992605  740558 system_pods.go:116] waiting for k8s-apps to be running ...
	I0920 19:24:18.023219  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:18.023462  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:18.214793  740558 system_pods.go:86] 18 kube-system pods found
	I0920 19:24:18.214894  740558 system_pods.go:89] "coredns-7c65d6cfc9-jvb97" [1aed1973-f82b-40c8-9a52-ffd69196ddb3] Running
	I0920 19:24:18.214923  740558 system_pods.go:89] "csi-hostpath-attacher-0" [3b5a69b0-ef2a-4e59-ae0f-df75c983227f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0920 19:24:18.214951  740558 system_pods.go:89] "csi-hostpath-resizer-0" [cf42fb9d-12c5-4500-8d8c-447f9f192175] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0920 19:24:18.214981  740558 system_pods.go:89] "csi-hostpathplugin-x78bc" [66f7a68e-78a5-456c-9958-bda260389f90] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0920 19:24:18.215007  740558 system_pods.go:89] "etcd-addons-388835" [21cd7c8d-6a8f-4f1f-aa01-45c1b3d2d04d] Running
	I0920 19:24:18.215033  740558 system_pods.go:89] "kindnet-gffmg" [01a1548e-ada9-49d4-aead-0fb27af5b884] Running
	I0920 19:24:18.215057  740558 system_pods.go:89] "kube-apiserver-addons-388835" [8a501fc6-4064-4273-b7c3-ab4aacca5161] Running
	I0920 19:24:18.215083  740558 system_pods.go:89] "kube-controller-manager-addons-388835" [0c5c526c-a15a-4deb-a765-016282b49d92] Running
	I0920 19:24:18.215110  740558 system_pods.go:89] "kube-ingress-dns-minikube" [7dbf4aec-be85-43e5-813d-75b1b3660834] Running
	I0920 19:24:18.215134  740558 system_pods.go:89] "kube-proxy-9r82v" [1e88a6c5-e432-44c8-b40b-58b0d863e34f] Running
	I0920 19:24:18.215159  740558 system_pods.go:89] "kube-scheduler-addons-388835" [34e5fda7-13d9-4699-b6fc-c9dbbd077725] Running
	I0920 19:24:18.215186  740558 system_pods.go:89] "metrics-server-84c5f94fbc-qpwkg" [6d2ee6cf-4892-473c-a648-0405b31eddf6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0920 19:24:18.215216  740558 system_pods.go:89] "nvidia-device-plugin-daemonset-pst9m" [a63a2067-c0d8-4755-bc26-33ef9f8e8c7d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0920 19:24:18.215247  740558 system_pods.go:89] "registry-66c9cd494c-mr26g" [b31af5af-2dd0-483b-bb89-7be808c67c81] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0920 19:24:18.215273  740558 system_pods.go:89] "registry-proxy-vt26f" [c4bffcb8-e224-4e7d-9149-e0a9c22d46f4] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0920 19:24:18.215299  740558 system_pods.go:89] "snapshot-controller-56fcc65765-r6bk5" [faf94dc5-50bc-4ebc-b4de-352fca1feb31] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 19:24:18.215330  740558 system_pods.go:89] "snapshot-controller-56fcc65765-rtlld" [11061a27-925b-454c-9303-c1bcf4149d7b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0920 19:24:18.215356  740558 system_pods.go:89] "storage-provisioner" [3e689755-1d20-4585-84bd-8beb171718d3] Running
	I0920 19:24:18.215384  740558 system_pods.go:126] duration metric: took 222.756896ms to wait for k8s-apps to be running ...
	I0920 19:24:18.215408  740558 system_svc.go:44] waiting for kubelet service to be running ....
	I0920 19:24:18.215489  740558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:24:18.247666  740558 system_svc.go:56] duration metric: took 32.231792ms WaitForService to wait for kubelet
	I0920 19:24:18.247696  740558 kubeadm.go:582] duration metric: took 17.902946244s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 19:24:18.247716  740558 node_conditions.go:102] verifying NodePressure condition ...
	I0920 19:24:18.393647  740558 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0920 19:24:18.393683  740558 node_conditions.go:123] node cpu capacity is 2
	I0920 19:24:18.393697  740558 node_conditions.go:105] duration metric: took 145.975343ms to run NodePressure ...
	I0920 19:24:18.393729  740558 start.go:241] waiting for startup goroutines ...
	I0920 19:24:18.468166  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:18.522441  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:18.523144  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:18.968320  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:19.023592  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:19.024581  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:19.469125  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:19.569525  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:19.570771  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:19.974093  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:20.024725  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:20.025806  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:20.468628  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:20.523246  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:20.524404  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:20.968886  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:21.023185  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:21.025221  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:21.469370  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:21.523697  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:21.524618  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:21.969695  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:22.069440  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:22.070802  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:22.467844  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:22.520875  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:22.521772  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:22.967806  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:23.023113  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:23.024686  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:23.467784  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:23.525025  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:23.525671  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:23.967960  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:24.069243  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:24.069697  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:24.470425  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:24.520666  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:24.523755  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:24.969014  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:25.070029  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:25.071651  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:25.468358  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:25.521477  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:25.522499  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:25.970687  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:26.021617  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:26.022810  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:26.471697  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:26.523038  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:26.523577  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:26.969166  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:27.021627  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:27.023965  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:27.468934  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:27.521169  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:27.521777  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:27.973434  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:28.021437  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:28.023351  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:28.467951  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:28.521780  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:28.522894  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:28.968547  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:29.022321  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:29.023737  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:29.467948  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:29.520906  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:29.521849  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:29.968315  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:30.046849  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:30.048316  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:30.471328  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:30.531613  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:30.532231  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:30.969053  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:31.022421  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:31.023324  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:31.471949  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:31.522834  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:31.524049  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:31.968385  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:32.027479  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:32.028801  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:32.468893  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:32.523682  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:32.523943  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:32.968177  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:33.021793  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:33.023308  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:33.468148  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:33.521396  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:33.522570  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:33.968300  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:34.023157  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:34.024143  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:34.467879  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:34.521308  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:34.523583  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:34.968233  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:35.029195  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:35.030023  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:35.470698  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:35.524513  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:35.525710  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:35.967622  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:36.024303  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0920 19:24:36.028430  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:36.470113  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:36.521616  740558 kapi.go:107] duration metric: took 25.505567061s to wait for kubernetes.io/minikube-addons=registry ...
	I0920 19:24:36.522532  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:36.971950  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:37.023399  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:37.468424  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:37.522197  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:37.967885  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:38.021826  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:38.467674  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:38.520746  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:38.967758  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:39.023057  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:39.467851  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:39.521417  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:39.969154  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:40.021163  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:40.468036  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:40.522044  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:40.968723  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:41.021692  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:41.470677  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:41.522066  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:41.968467  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:42.021892  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:42.469130  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:42.522670  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:42.968020  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:43.020268  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:43.471060  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:43.520520  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:43.968696  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:44.023526  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:44.468198  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:44.520417  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:44.975030  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:45.027942  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:45.468535  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:45.522800  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:45.968524  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:46.022327  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:46.468834  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:46.522889  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:46.968736  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:47.021233  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:47.468958  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:47.526109  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:47.968868  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:48.020840  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:48.475646  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:48.575134  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:48.968750  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:49.025121  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:49.468218  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:49.521240  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:49.968685  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:50.022693  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:50.468012  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:50.520812  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:50.967659  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:51.021346  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:51.471015  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:51.521145  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:51.968266  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:52.021339  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:52.467980  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:52.520169  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:52.967762  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:53.021248  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:53.468997  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:53.520887  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:53.967858  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:54.022478  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:54.468602  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:54.521129  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:54.967616  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:55.021142  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:55.468793  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:55.522404  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:55.969367  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:56.020628  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:56.468253  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:56.521176  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:56.968885  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:57.021162  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:57.469723  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:57.528057  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:57.973568  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:58.073315  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:58.468036  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:58.522646  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:58.971004  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:59.020595  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:59.469595  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:24:59.569035  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:24:59.969147  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:25:00.058662  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:00.475667  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:25:00.528673  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:00.967537  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:25:01.019911  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:01.471407  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:25:01.520133  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:01.968360  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0920 19:25:02.021197  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:02.469064  740558 kapi.go:107] duration metric: took 50.506198335s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0920 19:25:02.521321  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:03.020964  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:03.521335  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:04.020993  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:04.520504  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:05.021146  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:05.520763  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:06.021251  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:06.520243  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:07.021089  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:07.520132  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:08.020647  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:08.520736  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:09.020558  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:09.521527  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:10.028999  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:10.520151  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:11.021172  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:11.520106  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:12.021073  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:12.520775  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:13.021192  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:13.520505  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:14.021306  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:14.521286  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:15.024680  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:15.521101  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:16.022510  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:16.521560  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:17.021521  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:17.520661  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:18.021153  740558 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0920 19:25:18.528367  740558 kapi.go:107] duration metric: took 1m7.512373137s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0920 19:25:36.243975  740558 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0920 19:25:36.244052  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:36.744698  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:37.243986  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:37.744061  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:38.244393  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:38.744947  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:39.243863  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:39.744287  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:40.244732  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:40.743651  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:41.243660  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:41.743859  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:42.246715  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:42.744087  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:43.243747  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:43.743838  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:44.244231  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:44.744155  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:45.248097  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:45.744277  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:46.244528  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:46.743463  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:47.244905  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:47.743999  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:48.244202  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:48.745305  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:49.244504  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:49.744596  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:50.244164  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:50.743309  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:51.243152  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:51.743805  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:52.243497  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:52.744137  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:53.244081  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:53.744097  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:54.244155  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:54.744953  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:55.243974  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:55.744272  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:56.244487  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:56.743783  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:57.243813  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:57.744027  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:58.244852  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:58.743369  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:59.244117  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:25:59.744407  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:00.307788  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:00.743827  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:01.244226  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:01.744828  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:02.244423  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:02.744426  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:03.243312  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:03.743053  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:04.244194  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:04.744587  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:05.244391  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:05.744816  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:06.245469  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:06.743602  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:07.243937  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:07.744543  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:08.243847  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:08.743668  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:09.243282  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:09.743927  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:10.244961  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:10.743821  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:11.244234  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:11.744057  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:12.244209  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:12.744456  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:13.243881  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:13.743882  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:14.243977  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:14.745034  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:15.243731  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:15.743929  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:16.244443  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:16.745082  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:17.244290  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:17.744076  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:18.244348  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:18.743342  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:19.244105  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:19.744562  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:20.243867  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:20.744107  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:21.243630  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:21.743471  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:22.244159  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:22.743947  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:23.243358  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:23.743346  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:24.244189  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:24.744226  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:25.244107  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:25.743458  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:26.244479  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:26.743612  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:27.244027  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:27.744077  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:28.244084  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:28.744520  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:29.243805  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:29.744097  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:30.244636  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:30.743592  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:31.243835  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:31.744163  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:32.244502  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:32.743760  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:33.243702  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:33.743825  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:34.244329  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:34.744131  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:35.244682  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:35.744185  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:36.243853  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:36.744229  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:37.244189  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:37.743795  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:38.245159  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:38.744418  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:39.244261  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:39.744666  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:40.243472  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:40.744189  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:41.244033  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:41.744368  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:42.250368  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:42.744011  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:43.249669  740558 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0920 19:26:43.743994  740558 kapi.go:107] duration metric: took 2m30.503991666s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0920 19:26:43.746990  740558 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-388835 cluster.
	I0920 19:26:43.750397  740558 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0920 19:26:43.752641  740558 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0920 19:26:43.755326  740558 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, volcano, storage-provisioner, nvidia-device-plugin, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0920 19:26:43.757681  740558 addons.go:510] duration metric: took 2m43.412260843s for enable addons: enabled=[cloud-spanner ingress-dns volcano storage-provisioner nvidia-device-plugin metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0920 19:26:43.757731  740558 start.go:246] waiting for cluster config update ...
	I0920 19:26:43.757753  740558 start.go:255] writing updated cluster config ...
	I0920 19:26:43.758046  740558 ssh_runner.go:195] Run: rm -f paused
	I0920 19:26:44.134490  740558 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0920 19:26:44.136863  740558 out.go:177] * Done! kubectl is now configured to use "addons-388835" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	e6d902f127e86       4f725bf50aaa5       2 minutes ago       Exited              gadget                                   5                   73de0502464cc       gadget-4njwd
	7ab95d06facff       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   8d3a674379901       gcp-auth-89d5ffd79-wwptb
	b86ec76c5cb6d       8b46b1cd48760       4 minutes ago       Running             admission                                0                   50171dac9459e       volcano-admission-77d7d48b68-p5jsb
	95875e3ffb1d7       289a818c8d9c5       4 minutes ago       Running             controller                               0                   706b3431dd4d2       ingress-nginx-controller-bc57996ff-bfc7b
	ea12edb131952       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   339b9aa4cda63       csi-hostpathplugin-x78bc
	0038c5d29c348       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   339b9aa4cda63       csi-hostpathplugin-x78bc
	3d4b598a842b7       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   339b9aa4cda63       csi-hostpathplugin-x78bc
	6053b9034c082       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   339b9aa4cda63       csi-hostpathplugin-x78bc
	50ec6873972dd       420193b27261a       5 minutes ago       Exited              patch                                    2                   b8cba6d21daa2       ingress-nginx-admission-patch-b8b2z
	6576a7d03c15a       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   339b9aa4cda63       csi-hostpathplugin-x78bc
	bbf6f768b5c54       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   69ab0172602de       csi-hostpath-resizer-0
	97621c8b83cb6       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   5f02ed2f15e5e       volcano-scheduler-576bc46687-g72p9
	6b6890cd73b2a       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   51fc005d6c718       csi-hostpath-attacher-0
	f1af5a7631fd8       420193b27261a       5 minutes ago       Exited              create                                   0                   227869144a2bc       ingress-nginx-admission-create-z9n2d
	8bfa2d0e6eb64       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   339b9aa4cda63       csi-hostpathplugin-x78bc
	248bb86e19b9c       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   872b1e723d134       snapshot-controller-56fcc65765-rtlld
	2b50752f3480f       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   173bfc9aa6515       snapshot-controller-56fcc65765-r6bk5
	4299cad1234fe       77bdba588b953       5 minutes ago       Running             yakd                                     0                   8823ab6f5523d       yakd-dashboard-67d98fc6b-j5kc9
	44d340eed7d26       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   4e0f2a66405f2       volcano-controllers-56675bb4d5-cmlsq
	87a56a900b6ba       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   3d99c3940f750       registry-proxy-vt26f
	208797c1ab34d       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   09a83ba3b4243       registry-66c9cd494c-mr26g
	9d30b0b4699dc       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   a3afef92a909d       metrics-server-84c5f94fbc-qpwkg
	ec0d2d026861b       8be4bcf8ec607       5 minutes ago       Running             cloud-spanner-emulator                   0                   b76c968889d93       cloud-spanner-emulator-769b77f747-d77bb
	b9bd5e91f0925       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   73a4aa900f8d0       local-path-provisioner-86d989889c-m2n4x
	066d4d6e39506       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   e2b4d6f261996       nvidia-device-plugin-daemonset-pst9m
	d19be4b6591cb       2f6c962e7b831       5 minutes ago       Running             coredns                                  0                   6e70ae4545181       coredns-7c65d6cfc9-jvb97
	cd651ab1307f5       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   10958739479c5       kube-ingress-dns-minikube
	6782f9539b5d1       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   2a21b0d0c9f94       storage-provisioner
	882d6195d5f55       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   e929d073eb3ab       kindnet-gffmg
	c2b6d4875f232       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   f9d61b365dc21       kube-proxy-9r82v
	bd1f1c74260c2       27e3830e14027       6 minutes ago       Running             etcd                                     0                   820b55eb2b63e       etcd-addons-388835
	51a1b225a480e       7f8aa378bb47d       6 minutes ago       Running             kube-scheduler                           0                   63b6675a981fa       kube-scheduler-addons-388835
	82c971605db8b       279f381cb3736       6 minutes ago       Running             kube-controller-manager                  0                   b3936ea9f319f       kube-controller-manager-addons-388835
	bffe567f7dff4       d3f53a98c0a9d       6 minutes ago       Running             kube-apiserver                           0                   d7367b707814d       kube-apiserver-addons-388835
	
	
	==> containerd <==
	Sep 20 19:27:46 addons-388835 containerd[811]: time="2024-09-20T19:27:46.600638309Z" level=info msg="CreateContainer within sandbox \"73de0502464cc1173980de8d01661a94d5883fda26d2de283fa207d4070e6fc1\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Sep 20 19:27:46 addons-388835 containerd[811]: time="2024-09-20T19:27:46.623786739Z" level=info msg="CreateContainer within sandbox \"73de0502464cc1173980de8d01661a94d5883fda26d2de283fa207d4070e6fc1\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"e6d902f127e8625a6755f6b99f36adac175a6105c5ced4f3d89708ad93de937e\""
	Sep 20 19:27:46 addons-388835 containerd[811]: time="2024-09-20T19:27:46.624811638Z" level=info msg="StartContainer for \"e6d902f127e8625a6755f6b99f36adac175a6105c5ced4f3d89708ad93de937e\""
	Sep 20 19:27:46 addons-388835 containerd[811]: time="2024-09-20T19:27:46.678213934Z" level=info msg="StartContainer for \"e6d902f127e8625a6755f6b99f36adac175a6105c5ced4f3d89708ad93de937e\" returns successfully"
	Sep 20 19:27:48 addons-388835 containerd[811]: time="2024-09-20T19:27:48.196161397Z" level=error msg="ExecSync for \"e6d902f127e8625a6755f6b99f36adac175a6105c5ced4f3d89708ad93de937e\" failed" error="failed to exec in container: failed to start exec \"fb2b3e35b8f33cef8e513cf46987c564472c16a24c18adbd399086cba7fce025\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 20 19:27:48 addons-388835 containerd[811]: time="2024-09-20T19:27:48.206019758Z" level=error msg="ExecSync for \"e6d902f127e8625a6755f6b99f36adac175a6105c5ced4f3d89708ad93de937e\" failed" error="failed to exec in container: failed to start exec \"4c55bab16e3f09517296a59bcfde252c2eb11ba391d261db3d59f17310bf327f\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 20 19:27:48 addons-388835 containerd[811]: time="2024-09-20T19:27:48.227359152Z" level=error msg="ExecSync for \"e6d902f127e8625a6755f6b99f36adac175a6105c5ced4f3d89708ad93de937e\" failed" error="failed to exec in container: failed to start exec \"ed4f666d4e1c70b9e3f76b1d2c4f39bb879c2ed89372fe1d12b59af4ec57dca2\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 20 19:27:48 addons-388835 containerd[811]: time="2024-09-20T19:27:48.239816569Z" level=error msg="ExecSync for \"e6d902f127e8625a6755f6b99f36adac175a6105c5ced4f3d89708ad93de937e\" failed" error="failed to exec in container: failed to start exec \"adbb157b8d9e7d515583ff128c659d60394a1aa72a86a86c291226b396a961b7\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 20 19:27:48 addons-388835 containerd[811]: time="2024-09-20T19:27:48.251473306Z" level=error msg="ExecSync for \"e6d902f127e8625a6755f6b99f36adac175a6105c5ced4f3d89708ad93de937e\" failed" error="failed to exec in container: failed to start exec \"59e712a7efbb74a0fb3fedef6d124e5ea59bbad252f053e9b83c8e2f96fe8b80\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 20 19:27:48 addons-388835 containerd[811]: time="2024-09-20T19:27:48.262409464Z" level=error msg="ExecSync for \"e6d902f127e8625a6755f6b99f36adac175a6105c5ced4f3d89708ad93de937e\" failed" error="failed to exec in container: failed to start exec \"1ceee00e054aa854398d518cd6d5aa0b60516d5f91efbd23869fd6c976f16ddf\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Sep 20 19:27:48 addons-388835 containerd[811]: time="2024-09-20T19:27:48.375878679Z" level=info msg="shim disconnected" id=e6d902f127e8625a6755f6b99f36adac175a6105c5ced4f3d89708ad93de937e namespace=k8s.io
	Sep 20 19:27:48 addons-388835 containerd[811]: time="2024-09-20T19:27:48.375955692Z" level=warning msg="cleaning up after shim disconnected" id=e6d902f127e8625a6755f6b99f36adac175a6105c5ced4f3d89708ad93de937e namespace=k8s.io
	Sep 20 19:27:48 addons-388835 containerd[811]: time="2024-09-20T19:27:48.375967016Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 20 19:27:48 addons-388835 containerd[811]: time="2024-09-20T19:27:48.832674012Z" level=info msg="RemoveContainer for \"f74e3c1f7d36e7a85abba09886a550e3b81b51b9f81cc34a7cef52960074bb03\""
	Sep 20 19:27:48 addons-388835 containerd[811]: time="2024-09-20T19:27:48.853462262Z" level=info msg="RemoveContainer for \"f74e3c1f7d36e7a85abba09886a550e3b81b51b9f81cc34a7cef52960074bb03\" returns successfully"
	Sep 20 19:27:55 addons-388835 containerd[811]: time="2024-09-20T19:27:55.529618823Z" level=info msg="RemoveContainer for \"dc86eddde76d99e76bd9200f66a04c7a9236bdbf2ad5e7089ac857c7a50194b1\""
	Sep 20 19:27:55 addons-388835 containerd[811]: time="2024-09-20T19:27:55.536915970Z" level=info msg="RemoveContainer for \"dc86eddde76d99e76bd9200f66a04c7a9236bdbf2ad5e7089ac857c7a50194b1\" returns successfully"
	Sep 20 19:27:55 addons-388835 containerd[811]: time="2024-09-20T19:27:55.538964923Z" level=info msg="StopPodSandbox for \"116dfa061d4c957c07b0355d68138f6288ba492cb4931b683a9cea83a6e3a946\""
	Sep 20 19:27:55 addons-388835 containerd[811]: time="2024-09-20T19:27:55.546709929Z" level=info msg="TearDown network for sandbox \"116dfa061d4c957c07b0355d68138f6288ba492cb4931b683a9cea83a6e3a946\" successfully"
	Sep 20 19:27:55 addons-388835 containerd[811]: time="2024-09-20T19:27:55.546749633Z" level=info msg="StopPodSandbox for \"116dfa061d4c957c07b0355d68138f6288ba492cb4931b683a9cea83a6e3a946\" returns successfully"
	Sep 20 19:27:55 addons-388835 containerd[811]: time="2024-09-20T19:27:55.547466513Z" level=info msg="RemovePodSandbox for \"116dfa061d4c957c07b0355d68138f6288ba492cb4931b683a9cea83a6e3a946\""
	Sep 20 19:27:55 addons-388835 containerd[811]: time="2024-09-20T19:27:55.547511042Z" level=info msg="Forcibly stopping sandbox \"116dfa061d4c957c07b0355d68138f6288ba492cb4931b683a9cea83a6e3a946\""
	Sep 20 19:27:55 addons-388835 containerd[811]: time="2024-09-20T19:27:55.555133595Z" level=info msg="TearDown network for sandbox \"116dfa061d4c957c07b0355d68138f6288ba492cb4931b683a9cea83a6e3a946\" successfully"
	Sep 20 19:27:55 addons-388835 containerd[811]: time="2024-09-20T19:27:55.561815303Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"116dfa061d4c957c07b0355d68138f6288ba492cb4931b683a9cea83a6e3a946\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 20 19:27:55 addons-388835 containerd[811]: time="2024-09-20T19:27:55.562079843Z" level=info msg="RemovePodSandbox \"116dfa061d4c957c07b0355d68138f6288ba492cb4931b683a9cea83a6e3a946\" returns successfully"
	
	
	==> coredns [d19be4b6591cb59a7a36378e755bf9363b7b0f40cbcbe5f8ea8758d748263633] <==
	[INFO] 10.244.0.8:33732 - 56325 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000185082s
	[INFO] 10.244.0.8:41436 - 12780 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002309227s
	[INFO] 10.244.0.8:41436 - 46062 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002640039s
	[INFO] 10.244.0.8:57847 - 29790 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000122502s
	[INFO] 10.244.0.8:57847 - 57946 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000161804s
	[INFO] 10.244.0.8:43789 - 56552 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000096246s
	[INFO] 10.244.0.8:43789 - 33260 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000120401s
	[INFO] 10.244.0.8:37982 - 45308 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000050084s
	[INFO] 10.244.0.8:37982 - 10235 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000034651s
	[INFO] 10.244.0.8:49825 - 10958 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052028s
	[INFO] 10.244.0.8:49825 - 57025 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00012041s
	[INFO] 10.244.0.8:34845 - 41168 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001713971s
	[INFO] 10.244.0.8:34845 - 34770 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001767526s
	[INFO] 10.244.0.8:47467 - 14851 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000055007s
	[INFO] 10.244.0.8:47467 - 48125 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00003831s
	[INFO] 10.244.0.24:46452 - 46300 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000146683s
	[INFO] 10.244.0.24:53873 - 57087 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000499576s
	[INFO] 10.244.0.24:32938 - 54178 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00014857s
	[INFO] 10.244.0.24:52312 - 31485 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00010025s
	[INFO] 10.244.0.24:47337 - 32392 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000106945s
	[INFO] 10.244.0.24:53773 - 31124 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101415s
	[INFO] 10.244.0.24:58875 - 46757 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.005655176s
	[INFO] 10.244.0.24:43079 - 8035 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004549202s
	[INFO] 10.244.0.24:44843 - 21359 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002391277s
	[INFO] 10.244.0.24:47854 - 62462 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001906307s
	
	
	==> describe nodes <==
	Name:               addons-388835
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-388835
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=addons-388835
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T19_23_56_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-388835
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-388835"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 19:23:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-388835
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 19:30:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 19:26:59 +0000   Fri, 20 Sep 2024 19:23:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 19:26:59 +0000   Fri, 20 Sep 2024 19:23:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 19:26:59 +0000   Fri, 20 Sep 2024 19:23:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 19:26:59 +0000   Fri, 20 Sep 2024 19:23:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-388835
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 27dc1953eb6b420990f4dde3f28e183c
	  System UUID:                7ea21304-f1c6-4bc7-a11c-37f70672c453
	  Boot ID:                    cfeac633-1b4b-4878-a7d1-bdd76da68a0f
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-d77bb     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  gadget                      gadget-4njwd                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  gcp-auth                    gcp-auth-89d5ffd79-wwptb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-bfc7b    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m54s
	  kube-system                 coredns-7c65d6cfc9-jvb97                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m2s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpathplugin-x78bc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 etcd-addons-388835                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m8s
	  kube-system                 kindnet-gffmg                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m3s
	  kube-system                 kube-apiserver-addons-388835                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-controller-manager-addons-388835       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-proxy-9r82v                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-addons-388835                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 metrics-server-84c5f94fbc-qpwkg             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m57s
	  kube-system                 nvidia-device-plugin-daemonset-pst9m        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 registry-66c9cd494c-mr26g                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 registry-proxy-vt26f                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 snapshot-controller-56fcc65765-r6bk5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 snapshot-controller-56fcc65765-rtlld        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  local-path-storage          local-path-provisioner-86d989889c-m2n4x     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  volcano-system              volcano-admission-77d7d48b68-p5jsb          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-controllers-56675bb4d5-cmlsq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-scheduler-576bc46687-g72p9          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-j5kc9              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 6m1s  kube-proxy       
	  Normal   Starting                 6m8s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m8s  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m8s  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m8s  kubelet          Node addons-388835 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m8s  kubelet          Node addons-388835 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m8s  kubelet          Node addons-388835 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m4s  node-controller  Node addons-388835 event: Registered Node addons-388835 in Controller
	
	
	==> dmesg <==
	[Sep20 18:22] FS-Cache: Duplicate cookie detected
	[  +0.000753] FS-Cache: O-cookie c=00000025 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001031] FS-Cache: O-cookie d=000000007b04e949{9P.session} n=00000000fd4f4036
	[  +0.001114] FS-Cache: O-key=[10] '34323936373734333137'
	[  +0.000820] FS-Cache: N-cookie c=00000026 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000956] FS-Cache: N-cookie d=000000007b04e949{9P.session} n=00000000b2f1fccb
	[  +0.001112] FS-Cache: N-key=[10] '34323936373734333137'
	
	
	==> etcd [bd1f1c74260c2deaa8e08f218a7d81db74971595a945dd90240f7a05673ba5c0] <==
	{"level":"info","ts":"2024-09-20T19:23:48.827723Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-20T19:23:49.550341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-20T19:23:49.550389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-20T19:23:49.550428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-20T19:23:49.550449Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-20T19:23:49.550459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T19:23:49.550469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-20T19:23:49.550477Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-20T19:23:49.555769Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-388835 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-20T19:23:49.555814Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:23:49.556029Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-20T19:23:49.556374Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:23:49.556676Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:23:49.557587Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-20T19:23:49.557984Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:23:49.558061Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:23:49.571000Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-20T19:23:49.585628Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-20T19:23:49.574345Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-20T19:23:49.574396Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-20T19:23:49.586962Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-20T19:24:01.034011Z","caller":"traceutil/trace.go:171","msg":"trace[1846454628] linearizableReadLoop","detail":"{readStateIndex:337; appliedIndex:335; }","duration":"122.36152ms","start":"2024-09-20T19:24:00.902923Z","end":"2024-09-20T19:24:01.025285Z","steps":["trace[1846454628] 'read index received'  (duration: 13.891034ms)","trace[1846454628] 'applied index is now lower than readState.Index'  (duration: 108.469305ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-20T19:24:01.035421Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.463794ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-proxy-9r82v\" ","response":"range_response_count:1 size:3426"}
	{"level":"info","ts":"2024-09-20T19:24:01.035473Z","caller":"traceutil/trace.go:171","msg":"trace[1443404074] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-9r82v; range_end:; response_count:1; response_revision:329; }","duration":"132.544178ms","start":"2024-09-20T19:24:00.902917Z","end":"2024-09-20T19:24:01.035462Z","steps":["trace[1443404074] 'agreement among raft nodes before linearized reading'  (duration: 131.265389ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-20T19:24:01.044179Z","caller":"traceutil/trace.go:171","msg":"trace[411689917] transaction","detail":"{read_only:false; response_revision:329; number_of_response:1; }","duration":"132.48927ms","start":"2024-09-20T19:24:00.902814Z","end":"2024-09-20T19:24:01.035304Z","steps":["trace[411689917] 'process raft request'  (duration: 55.823402ms)","trace[411689917] 'attach lease to kv pair' {req_type:put; key:/registry/events/kube-system/kindnet-gffmg.17f70a3d625eb234; req_size:677; } (duration: 66.456454ms)"],"step_count":2}
	
	
	==> gcp-auth [7ab95d06facff412e7e36e7d1522c98f3d5d3375b90505b122815b36ca5dae35] <==
	2024/09/20 19:26:43 GCP Auth Webhook started!
	2024/09/20 19:27:00 Ready to marshal response ...
	2024/09/20 19:27:00 Ready to write response ...
	2024/09/20 19:27:01 Ready to marshal response ...
	2024/09/20 19:27:01 Ready to write response ...
	
	
	==> kernel <==
	 19:30:03 up  3:12,  0 users,  load average: 0.65, 1.43, 2.17
	Linux addons-388835 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [882d6195d5f55e000dc1e1b214c0ba69f1613d46a5e48746903406d8e970cec1] <==
	I0920 19:28:02.520434       1 main.go:299] handling current node
	I0920 19:28:12.524108       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:28:12.524410       1 main.go:299] handling current node
	I0920 19:28:22.519666       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:28:22.519702       1 main.go:299] handling current node
	I0920 19:28:32.519749       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:28:32.519789       1 main.go:299] handling current node
	I0920 19:28:42.527606       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:28:42.527644       1 main.go:299] handling current node
	I0920 19:28:52.528406       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:28:52.528442       1 main.go:299] handling current node
	I0920 19:29:02.520274       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:29:02.520313       1 main.go:299] handling current node
	I0920 19:29:12.527404       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:29:12.527442       1 main.go:299] handling current node
	I0920 19:29:22.526640       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:29:22.526675       1 main.go:299] handling current node
	I0920 19:29:32.528714       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:29:32.528753       1 main.go:299] handling current node
	I0920 19:29:42.526848       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:29:42.526884       1 main.go:299] handling current node
	I0920 19:29:52.527494       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:29:52.527528       1 main.go:299] handling current node
	I0920 19:30:02.520170       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0920 19:30:02.520209       1 main.go:299] handling current node
	
	
	==> kube-apiserver [bffe567f7dff4cc859787ecbdac8fde08be6dbb8168cb4cd127d07bb0d86eaef] <==
	W0920 19:25:14.261809       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.172.230:443: connect: connection refused
	W0920 19:25:15.331034       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.172.230:443: connect: connection refused
	W0920 19:25:16.194057       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.141.98:443: connect: connection refused
	E0920 19:25:16.194097       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.141.98:443: connect: connection refused" logger="UnhandledError"
	W0920 19:25:16.195828       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.97.172.230:443: connect: connection refused
	W0920 19:25:16.225821       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.141.98:443: connect: connection refused
	E0920 19:25:16.225859       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.141.98:443: connect: connection refused" logger="UnhandledError"
	W0920 19:25:16.227486       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.97.172.230:443: connect: connection refused
	W0920 19:25:16.430622       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.172.230:443: connect: connection refused
	W0920 19:25:17.510589       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.172.230:443: connect: connection refused
	W0920 19:25:18.521373       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.172.230:443: connect: connection refused
	W0920 19:25:19.620566       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.172.230:443: connect: connection refused
	W0920 19:25:20.722281       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.172.230:443: connect: connection refused
	W0920 19:25:21.785650       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.172.230:443: connect: connection refused
	W0920 19:25:22.886505       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.172.230:443: connect: connection refused
	W0920 19:25:23.945407       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.172.230:443: connect: connection refused
	W0920 19:25:25.007284       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.172.230:443: connect: connection refused
	W0920 19:25:36.077651       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.141.98:443: connect: connection refused
	E0920 19:25:36.077699       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.141.98:443: connect: connection refused" logger="UnhandledError"
	W0920 19:26:16.208975       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.141.98:443: connect: connection refused
	E0920 19:26:16.209022       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.141.98:443: connect: connection refused" logger="UnhandledError"
	W0920 19:26:16.232909       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.110.141.98:443: connect: connection refused
	E0920 19:26:16.232955       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.110.141.98:443: connect: connection refused" logger="UnhandledError"
	I0920 19:27:00.757384       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0920 19:27:00.796360       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [82c971605db8b079c6182a77500daefad15d43c64c5631e1b92ca77e4104b717] <==
	I0920 19:26:16.255217       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 19:26:16.256941       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 19:26:16.270492       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 19:26:16.273584       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 19:26:16.284732       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 19:26:17.579856       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 19:26:17.600712       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 19:26:18.593038       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 19:26:18.706766       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 19:26:19.613175       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 19:26:19.698457       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 19:26:19.712955       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 19:26:19.723147       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 19:26:19.729430       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0920 19:26:20.621108       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 19:26:20.630589       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 19:26:20.639053       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0920 19:26:43.697504       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="23.635878ms"
	I0920 19:26:43.697627       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="82.904µs"
	I0920 19:26:49.019336       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0920 19:26:49.062175       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0920 19:26:50.017695       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0920 19:26:50.050877       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0920 19:26:59.031695       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-388835"
	I0920 19:27:00.454355       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [c2b6d4875f2320d671b24314b55e7c9764efc3097e736601d92efbd3b870d94b] <==
	I0920 19:24:01.837170       1 server_linux.go:66] "Using iptables proxy"
	I0920 19:24:01.985391       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0920 19:24:01.985480       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0920 19:24:02.068995       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0920 19:24:02.069060       1 server_linux.go:169] "Using iptables Proxier"
	I0920 19:24:02.074960       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0920 19:24:02.075812       1 server.go:483] "Version info" version="v1.31.1"
	I0920 19:24:02.075862       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0920 19:24:02.082745       1 config.go:328] "Starting node config controller"
	I0920 19:24:02.082776       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0920 19:24:02.083529       1 config.go:199] "Starting service config controller"
	I0920 19:24:02.083540       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0920 19:24:02.083558       1 config.go:105] "Starting endpoint slice config controller"
	I0920 19:24:02.083563       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0920 19:24:02.182971       1 shared_informer.go:320] Caches are synced for node config
	I0920 19:24:02.184126       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0920 19:24:02.184183       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [51a1b225a480eca20457aa158e41d2fe674f10ab448cef879ff7b3fd4e65724c] <==
	W0920 19:23:53.089184       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0920 19:23:53.089201       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:23:53.089274       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 19:23:53.089294       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:23:53.089352       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 19:23:53.089369       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:23:53.089432       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 19:23:53.089450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:23:53.089511       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0920 19:23:53.089528       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:23:53.088789       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 19:23:53.089571       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:23:53.972967       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0920 19:23:53.973266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0920 19:23:54.021341       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 19:23:54.021571       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:23:54.041325       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 19:23:54.041511       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0920 19:23:54.062638       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 19:23:54.062914       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:23:54.090407       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 19:23:54.090698       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0920 19:23:54.133889       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 19:23:54.134736       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I0920 19:23:54.578398       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
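The "forbidden" list/watch errors above are usually transient: the kube-scheduler started before its RBAC bindings were visible, and the final "Caches are synced" line suggests they cleared on their own. A minimal, purely illustrative way to confirm the scheduler's permissions afterwards (not part of the test run):

	# Check that system:kube-scheduler can now list the resources it was denied
	kubectl --context addons-388835 auth can-i list csistoragecapacities.storage.k8s.io --as=system:kube-scheduler
	kubectl --context addons-388835 auth can-i list csidrivers.storage.k8s.io --as=system:kube-scheduler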
	
	
	==> kubelet <==
	Sep 20 19:27:55 addons-388835 kubelet[1488]: I0920 19:27:55.527887    1488 scope.go:117] "RemoveContainer" containerID="dc86eddde76d99e76bd9200f66a04c7a9236bdbf2ad5e7089ac857c7a50194b1"
	Sep 20 19:28:08 addons-388835 kubelet[1488]: I0920 19:28:08.464823    1488 scope.go:117] "RemoveContainer" containerID="e6d902f127e8625a6755f6b99f36adac175a6105c5ced4f3d89708ad93de937e"
	Sep 20 19:28:08 addons-388835 kubelet[1488]: E0920 19:28:08.465065    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-4njwd_gadget(a5380331-5c07-4ad6-a2d8-b954f4e41e9e)\"" pod="gadget/gadget-4njwd" podUID="a5380331-5c07-4ad6-a2d8-b954f4e41e9e"
	Sep 20 19:28:09 addons-388835 kubelet[1488]: I0920 19:28:09.464430    1488 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-mr26g" secret="" err="secret \"gcp-auth\" not found"
	Sep 20 19:28:11 addons-388835 kubelet[1488]: I0920 19:28:11.464094    1488 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-pst9m" secret="" err="secret \"gcp-auth\" not found"
	Sep 20 19:28:18 addons-388835 kubelet[1488]: I0920 19:28:18.463756    1488 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-vt26f" secret="" err="secret \"gcp-auth\" not found"
	Sep 20 19:28:22 addons-388835 kubelet[1488]: I0920 19:28:22.464448    1488 scope.go:117] "RemoveContainer" containerID="e6d902f127e8625a6755f6b99f36adac175a6105c5ced4f3d89708ad93de937e"
	Sep 20 19:28:22 addons-388835 kubelet[1488]: E0920 19:28:22.464659    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-4njwd_gadget(a5380331-5c07-4ad6-a2d8-b954f4e41e9e)\"" pod="gadget/gadget-4njwd" podUID="a5380331-5c07-4ad6-a2d8-b954f4e41e9e"
	Sep 20 19:28:34 addons-388835 kubelet[1488]: I0920 19:28:34.464631    1488 scope.go:117] "RemoveContainer" containerID="e6d902f127e8625a6755f6b99f36adac175a6105c5ced4f3d89708ad93de937e"
	Sep 20 19:28:34 addons-388835 kubelet[1488]: E0920 19:28:34.464836    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-4njwd_gadget(a5380331-5c07-4ad6-a2d8-b954f4e41e9e)\"" pod="gadget/gadget-4njwd" podUID="a5380331-5c07-4ad6-a2d8-b954f4e41e9e"
	Sep 20 19:28:49 addons-388835 kubelet[1488]: I0920 19:28:49.464758    1488 scope.go:117] "RemoveContainer" containerID="e6d902f127e8625a6755f6b99f36adac175a6105c5ced4f3d89708ad93de937e"
	Sep 20 19:28:49 addons-388835 kubelet[1488]: E0920 19:28:49.464951    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-4njwd_gadget(a5380331-5c07-4ad6-a2d8-b954f4e41e9e)\"" pod="gadget/gadget-4njwd" podUID="a5380331-5c07-4ad6-a2d8-b954f4e41e9e"
	Sep 20 19:29:03 addons-388835 kubelet[1488]: I0920 19:29:03.464101    1488 scope.go:117] "RemoveContainer" containerID="e6d902f127e8625a6755f6b99f36adac175a6105c5ced4f3d89708ad93de937e"
	Sep 20 19:29:03 addons-388835 kubelet[1488]: E0920 19:29:03.464299    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-4njwd_gadget(a5380331-5c07-4ad6-a2d8-b954f4e41e9e)\"" pod="gadget/gadget-4njwd" podUID="a5380331-5c07-4ad6-a2d8-b954f4e41e9e"
	Sep 20 19:29:14 addons-388835 kubelet[1488]: I0920 19:29:14.464657    1488 scope.go:117] "RemoveContainer" containerID="e6d902f127e8625a6755f6b99f36adac175a6105c5ced4f3d89708ad93de937e"
	Sep 20 19:29:14 addons-388835 kubelet[1488]: E0920 19:29:14.464842    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-4njwd_gadget(a5380331-5c07-4ad6-a2d8-b954f4e41e9e)\"" pod="gadget/gadget-4njwd" podUID="a5380331-5c07-4ad6-a2d8-b954f4e41e9e"
	Sep 20 19:29:18 addons-388835 kubelet[1488]: I0920 19:29:18.464475    1488 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-pst9m" secret="" err="secret \"gcp-auth\" not found"
	Sep 20 19:29:27 addons-388835 kubelet[1488]: I0920 19:29:27.464435    1488 scope.go:117] "RemoveContainer" containerID="e6d902f127e8625a6755f6b99f36adac175a6105c5ced4f3d89708ad93de937e"
	Sep 20 19:29:27 addons-388835 kubelet[1488]: E0920 19:29:27.464640    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-4njwd_gadget(a5380331-5c07-4ad6-a2d8-b954f4e41e9e)\"" pod="gadget/gadget-4njwd" podUID="a5380331-5c07-4ad6-a2d8-b954f4e41e9e"
	Sep 20 19:29:36 addons-388835 kubelet[1488]: I0920 19:29:36.463727    1488 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-mr26g" secret="" err="secret \"gcp-auth\" not found"
	Sep 20 19:29:40 addons-388835 kubelet[1488]: I0920 19:29:40.464132    1488 scope.go:117] "RemoveContainer" containerID="e6d902f127e8625a6755f6b99f36adac175a6105c5ced4f3d89708ad93de937e"
	Sep 20 19:29:40 addons-388835 kubelet[1488]: E0920 19:29:40.464355    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-4njwd_gadget(a5380331-5c07-4ad6-a2d8-b954f4e41e9e)\"" pod="gadget/gadget-4njwd" podUID="a5380331-5c07-4ad6-a2d8-b954f4e41e9e"
	Sep 20 19:29:44 addons-388835 kubelet[1488]: I0920 19:29:44.464320    1488 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-vt26f" secret="" err="secret \"gcp-auth\" not found"
	Sep 20 19:29:55 addons-388835 kubelet[1488]: I0920 19:29:55.465811    1488 scope.go:117] "RemoveContainer" containerID="e6d902f127e8625a6755f6b99f36adac175a6105c5ced4f3d89708ad93de937e"
	Sep 20 19:29:55 addons-388835 kubelet[1488]: E0920 19:29:55.466537    1488 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-4njwd_gadget(a5380331-5c07-4ad6-a2d8-b954f4e41e9e)\"" pod="gadget/gadget-4njwd" podUID="a5380331-5c07-4ad6-a2d8-b954f4e41e9e"
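The kubelet log shows the gadget container stuck in CrashLoopBackOff with a 2m40s back-off, plus repeated warnings that the gcp-auth image-pull secret is missing for the registry and nvidia-device-plugin pods. A hedged sketch of how one might inspect the failing pod after the fact, assuming the cluster is still running:

	# Inspect the crash-looping gadget pod and its previous container logs
	kubectl --context addons-388835 -n gadget describe pod gadget-4njwd
	kubectl --context addons-388835 -n gadget logs gadget-4njwd -c gadget --previous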
	
	
	==> storage-provisioner [6782f9539b5d1c0d159f1cd01a692fb2cea5f97bbe291e36023147697170f8b1] <==
	I0920 19:24:05.083718       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 19:24:05.354955       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 19:24:05.355090       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 19:24:05.430052       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 19:24:05.430281       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-388835_5f88fecc-a596-4634-a141-1ceb18fe8a6a!
	I0920 19:24:05.431221       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad3b2e2a-c1ff-48fc-bca6-3fff62a2b018", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-388835_5f88fecc-a596-4634-a141-1ceb18fe8a6a became leader
	I0920 19:24:05.531744       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-388835_5f88fecc-a596-4634-a141-1ceb18fe8a6a!
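The storage-provisioner acquired the kube-system/k8s.io-minikube-hostpath lease through an endpoints-based leader-election lock, and the LeaderElection event above records the new holder. An illustrative way to see the current leader record (a sketch, assuming the default client-go endpoints lock, where the holder is stored in the control-plane.alpha.kubernetes.io/leader annotation):

	# The leader record lives in an annotation on the Endpoints lock object
	kubectl --context addons-388835 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml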
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-388835 -n addons-388835
helpers_test.go:261: (dbg) Run:  kubectl --context addons-388835 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-z9n2d ingress-nginx-admission-patch-b8b2z test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-388835 describe pod ingress-nginx-admission-create-z9n2d ingress-nginx-admission-patch-b8b2z test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-388835 describe pod ingress-nginx-admission-create-z9n2d ingress-nginx-admission-patch-b8b2z test-job-nginx-0: exit status 1 (86.735382ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-z9n2d" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-b8b2z" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-388835 describe pod ingress-nginx-admission-create-z9n2d ingress-nginx-admission-patch-b8b2z test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (200.13s)
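The post-mortem "describe pod" above returned NotFound because the ingress-nginx admission pods and the Volcano test pod no longer existed by the time it ran. When reproducing locally, capturing recent events right after the failure tends to be more useful than describing pods that may already be gone; an illustrative example:

	# Recent events across all namespaces, oldest first
	kubectl --context addons-388835 get events -A --sort-by=.lastTimestamp | tail -n 40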

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (377.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-060703 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0920 20:13:42.905133  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-060703 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m13.162060392s)

                                                
                                                
-- stdout --
	* [old-k8s-version-060703] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-734403/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-734403/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-060703" primary control-plane node in "old-k8s-version-060703" cluster
	* Pulling base image v0.0.45-1726589491-19662 ...
	* Restarting existing docker container for "old-k8s-version-060703" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-060703 addons enable metrics-server
	
	* Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 20:13:29.903108  946192 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:13:29.903393  946192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:13:29.903423  946192 out.go:358] Setting ErrFile to fd 2...
	I0920 20:13:29.903445  946192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:13:29.903942  946192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-734403/.minikube/bin
	I0920 20:13:29.904627  946192 out.go:352] Setting JSON to false
	I0920 20:13:29.905935  946192 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14161,"bootTime":1726849049,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 20:13:29.906081  946192 start.go:139] virtualization:  
	I0920 20:13:29.909305  946192 out.go:177] * [old-k8s-version-060703] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 20:13:29.912333  946192 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 20:13:29.912556  946192 notify.go:220] Checking for updates...
	I0920 20:13:29.916633  946192 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 20:13:29.919014  946192 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-734403/kubeconfig
	I0920 20:13:29.921114  946192 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-734403/.minikube
	I0920 20:13:29.923417  946192 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 20:13:29.925595  946192 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 20:13:29.928541  946192 config.go:182] Loaded profile config "old-k8s-version-060703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0920 20:13:29.931463  946192 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0920 20:13:29.933468  946192 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 20:13:29.966794  946192 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 20:13:29.966916  946192 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 20:13:30.060983  946192 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 20:13:30.012194187 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 20:13:30.061260  946192 docker.go:318] overlay module found
	I0920 20:13:30.070836  946192 out.go:177] * Using the docker driver based on existing profile
	I0920 20:13:30.073098  946192 start.go:297] selected driver: docker
	I0920 20:13:30.073134  946192 start.go:901] validating driver "docker" against &{Name:old-k8s-version-060703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-060703 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:13:30.073266  946192 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 20:13:30.074075  946192 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 20:13:30.192350  946192 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 20:13:30.1771363 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors
:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 20:13:30.192790  946192 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 20:13:30.192829  946192 cni.go:84] Creating CNI manager for ""
	I0920 20:13:30.192872  946192 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 20:13:30.192927  946192 start.go:340] cluster config:
	{Name:old-k8s-version-060703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-060703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:13:30.195640  946192 out.go:177] * Starting "old-k8s-version-060703" primary control-plane node in "old-k8s-version-060703" cluster
	I0920 20:13:30.198867  946192 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0920 20:13:30.201126  946192 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 20:13:30.203113  946192 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0920 20:13:30.203184  946192 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-734403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0920 20:13:30.203198  946192 cache.go:56] Caching tarball of preloaded images
	I0920 20:13:30.203288  946192 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 20:13:30.203293  946192 preload.go:172] Found /home/jenkins/minikube-integration/19678-734403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 20:13:30.203316  946192 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0920 20:13:30.203451  946192 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/config.json ...
	W0920 20:13:30.233915  946192 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 is of wrong architecture
	I0920 20:13:30.233944  946192 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 20:13:30.234041  946192 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 20:13:30.234067  946192 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 20:13:30.234072  946192 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 20:13:30.234082  946192 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 20:13:30.234093  946192 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 20:13:30.438714  946192 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 20:13:30.438756  946192 cache.go:194] Successfully downloaded all kic artifacts
	I0920 20:13:30.438787  946192 start.go:360] acquireMachinesLock for old-k8s-version-060703: {Name:mk02a1fb6f003000f73597c60551f16747311345 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 20:13:30.438854  946192 start.go:364] duration metric: took 45.227µs to acquireMachinesLock for "old-k8s-version-060703"
	I0920 20:13:30.438881  946192 start.go:96] Skipping create...Using existing machine configuration
	I0920 20:13:30.438895  946192 fix.go:54] fixHost starting: 
	I0920 20:13:30.439200  946192 cli_runner.go:164] Run: docker container inspect old-k8s-version-060703 --format={{.State.Status}}
	I0920 20:13:30.457888  946192 fix.go:112] recreateIfNeeded on old-k8s-version-060703: state=Stopped err=<nil>
	W0920 20:13:30.457923  946192 fix.go:138] unexpected machine state, will restart: <nil>
	I0920 20:13:30.460471  946192 out.go:177] * Restarting existing docker container for "old-k8s-version-060703" ...
	I0920 20:13:30.462510  946192 cli_runner.go:164] Run: docker start old-k8s-version-060703
	I0920 20:13:30.803002  946192 cli_runner.go:164] Run: docker container inspect old-k8s-version-060703 --format={{.State.Status}}
	I0920 20:13:30.826012  946192 kic.go:430] container "old-k8s-version-060703" state is running.
	I0920 20:13:30.826475  946192 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-060703
	I0920 20:13:30.852568  946192 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/config.json ...
	I0920 20:13:30.852807  946192 machine.go:93] provisionDockerMachine start ...
	I0920 20:13:30.852905  946192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-060703
	I0920 20:13:30.886229  946192 main.go:141] libmachine: Using SSH client type: native
	I0920 20:13:30.886672  946192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0920 20:13:30.886690  946192 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 20:13:30.887389  946192 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41704->127.0.0.1:33433: read: connection reset by peer
	I0920 20:13:34.055211  946192 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-060703
	
	I0920 20:13:34.055238  946192 ubuntu.go:169] provisioning hostname "old-k8s-version-060703"
	I0920 20:13:34.055350  946192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-060703
	I0920 20:13:34.084623  946192 main.go:141] libmachine: Using SSH client type: native
	I0920 20:13:34.084914  946192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0920 20:13:34.084928  946192 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-060703 && echo "old-k8s-version-060703" | sudo tee /etc/hostname
	I0920 20:13:34.265783  946192 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-060703
	
	I0920 20:13:34.266008  946192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-060703
	I0920 20:13:34.288326  946192 main.go:141] libmachine: Using SSH client type: native
	I0920 20:13:34.288686  946192 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0920 20:13:34.288716  946192 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-060703' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-060703/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-060703' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 20:13:34.455058  946192 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 20:13:34.455086  946192 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19678-734403/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-734403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-734403/.minikube}
	I0920 20:13:34.455143  946192 ubuntu.go:177] setting up certificates
	I0920 20:13:34.455152  946192 provision.go:84] configureAuth start
	I0920 20:13:34.455281  946192 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-060703
	I0920 20:13:34.473473  946192 provision.go:143] copyHostCerts
	I0920 20:13:34.473546  946192 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-734403/.minikube/key.pem, removing ...
	I0920 20:13:34.473562  946192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-734403/.minikube/key.pem
	I0920 20:13:34.473638  946192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-734403/.minikube/key.pem (1679 bytes)
	I0920 20:13:34.473798  946192 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-734403/.minikube/ca.pem, removing ...
	I0920 20:13:34.473810  946192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-734403/.minikube/ca.pem
	I0920 20:13:34.473843  946192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-734403/.minikube/ca.pem (1078 bytes)
	I0920 20:13:34.473910  946192 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-734403/.minikube/cert.pem, removing ...
	I0920 20:13:34.473920  946192 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-734403/.minikube/cert.pem
	I0920 20:13:34.473945  946192 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-734403/.minikube/cert.pem (1123 bytes)
	I0920 20:13:34.474009  946192 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-734403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-060703 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-060703]
	I0920 20:13:34.742259  946192 provision.go:177] copyRemoteCerts
	I0920 20:13:34.742719  946192 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 20:13:34.742788  946192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-060703
	I0920 20:13:34.774383  946192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/old-k8s-version-060703/id_rsa Username:docker}
	I0920 20:13:34.887503  946192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 20:13:34.918967  946192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0920 20:13:34.956038  946192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0920 20:13:34.995579  946192 provision.go:87] duration metric: took 540.408023ms to configureAuth
	I0920 20:13:34.995608  946192 ubuntu.go:193] setting minikube options for container-runtime
	I0920 20:13:34.995844  946192 config.go:182] Loaded profile config "old-k8s-version-060703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0920 20:13:34.995861  946192 machine.go:96] duration metric: took 4.143038193s to provisionDockerMachine
	I0920 20:13:34.995869  946192 start.go:293] postStartSetup for "old-k8s-version-060703" (driver="docker")
	I0920 20:13:34.995896  946192 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 20:13:34.995968  946192 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 20:13:34.996039  946192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-060703
	I0920 20:13:35.028308  946192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/old-k8s-version-060703/id_rsa Username:docker}
	I0920 20:13:35.144405  946192 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 20:13:35.149331  946192 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 20:13:35.149413  946192 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 20:13:35.149441  946192 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 20:13:35.149469  946192 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 20:13:35.149493  946192 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-734403/.minikube/addons for local assets ...
	I0920 20:13:35.149567  946192 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-734403/.minikube/files for local assets ...
	I0920 20:13:35.149688  946192 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-734403/.minikube/files/etc/ssl/certs/7397872.pem -> 7397872.pem in /etc/ssl/certs
	I0920 20:13:35.149834  946192 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 20:13:35.160485  946192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/files/etc/ssl/certs/7397872.pem --> /etc/ssl/certs/7397872.pem (1708 bytes)
	I0920 20:13:35.187226  946192 start.go:296] duration metric: took 191.324566ms for postStartSetup
	I0920 20:13:35.187345  946192 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 20:13:35.187430  946192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-060703
	I0920 20:13:35.204446  946192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/old-k8s-version-060703/id_rsa Username:docker}
	I0920 20:13:35.305924  946192 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 20:13:35.310518  946192 fix.go:56] duration metric: took 4.871615206s for fixHost
	I0920 20:13:35.310585  946192 start.go:83] releasing machines lock for "old-k8s-version-060703", held for 4.87171681s
	I0920 20:13:35.310686  946192 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-060703
	I0920 20:13:35.327933  946192 ssh_runner.go:195] Run: cat /version.json
	I0920 20:13:35.328000  946192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-060703
	I0920 20:13:35.328133  946192 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 20:13:35.328194  946192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-060703
	I0920 20:13:35.352250  946192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/old-k8s-version-060703/id_rsa Username:docker}
	I0920 20:13:35.360846  946192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/old-k8s-version-060703/id_rsa Username:docker}
	I0920 20:13:35.585958  946192 ssh_runner.go:195] Run: systemctl --version
	I0920 20:13:35.591431  946192 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 20:13:35.595922  946192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0920 20:13:35.616015  946192 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0920 20:13:35.616131  946192 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 20:13:35.625723  946192 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0920 20:13:35.625756  946192 start.go:495] detecting cgroup driver to use...
	I0920 20:13:35.625790  946192 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 20:13:35.625872  946192 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0920 20:13:35.640717  946192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 20:13:35.652733  946192 docker.go:217] disabling cri-docker service (if available) ...
	I0920 20:13:35.652793  946192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 20:13:35.666181  946192 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 20:13:35.678466  946192 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 20:13:35.772727  946192 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 20:13:35.859075  946192 docker.go:233] disabling docker service ...
	I0920 20:13:35.859198  946192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 20:13:35.872864  946192 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 20:13:35.884614  946192 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 20:13:35.978637  946192 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 20:13:36.089956  946192 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 20:13:36.105135  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 20:13:36.124830  946192 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0920 20:13:36.136684  946192 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 20:13:36.148046  946192 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 20:13:36.148164  946192 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 20:13:36.159017  946192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 20:13:36.170804  946192 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 20:13:36.180804  946192 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 20:13:36.191229  946192 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 20:13:36.201194  946192 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 20:13:36.212509  946192 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 20:13:36.221850  946192 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 20:13:36.233067  946192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:13:36.325334  946192 ssh_runner.go:195] Run: sudo systemctl restart containerd
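The sed edits above force containerd to the cgroupfs cgroup driver (SystemdCgroup = false), swap the legacy io.containerd.runtime.v1.linux and runc.v1 runtimes for io.containerd.runc.v2, set the sandbox image to registry.k8s.io/pause:3.2, and point conf_dir at /etc/cni/net.d before containerd is restarted. A quick way to confirm the result on the node (an illustrative sketch; the grep keys are just the ones edited above):

	# Inspect the rewritten containerd config inside the minikube container
	minikube -p old-k8s-version-060703 ssh -- grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml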
	I0920 20:13:36.489606  946192 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0920 20:13:36.489682  946192 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0920 20:13:36.494002  946192 start.go:563] Will wait 60s for crictl version
	I0920 20:13:36.494080  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:13:36.498033  946192 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 20:13:36.538039  946192 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0920 20:13:36.538162  946192 ssh_runner.go:195] Run: containerd --version
	I0920 20:13:36.561966  946192 ssh_runner.go:195] Run: containerd --version
	I0920 20:13:36.589269  946192 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I0920 20:13:36.591252  946192 cli_runner.go:164] Run: docker network inspect old-k8s-version-060703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 20:13:36.612944  946192 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0920 20:13:36.616723  946192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 20:13:36.628211  946192 kubeadm.go:883] updating cluster {Name:old-k8s-version-060703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-060703 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 20:13:36.628342  946192 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0920 20:13:36.628411  946192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 20:13:36.666885  946192 containerd.go:627] all images are preloaded for containerd runtime.
	I0920 20:13:36.666912  946192 containerd.go:534] Images already preloaded, skipping extraction
	I0920 20:13:36.666980  946192 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 20:13:36.704065  946192 containerd.go:627] all images are preloaded for containerd runtime.
	I0920 20:13:36.704091  946192 cache_images.go:84] Images are preloaded, skipping loading
	I0920 20:13:36.704100  946192 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0920 20:13:36.704274  946192 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-060703 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-060703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
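The drop-in above is what minikube generates for the kubelet systemd unit, pinning the v1.20.0 kubelet binary, the containerd socket, and the node IP 192.168.76.2. To double-check what actually landed on the node, something like the following would work (a sketch; the path matches the scp step logged further below):

	# Show the generated kubelet drop-in on the node
	minikube -p old-k8s-version-060703 ssh -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf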
	I0920 20:13:36.704362  946192 ssh_runner.go:195] Run: sudo crictl info
	I0920 20:13:36.743455  946192 cni.go:84] Creating CNI manager for ""
	I0920 20:13:36.743483  946192 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 20:13:36.743494  946192 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 20:13:36.743514  946192 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-060703 NodeName:old-k8s-version-060703 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0920 20:13:36.743657  946192 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-060703"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
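
The stream above carries four documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration; it is presumably what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. A minimal sketch of reading such a multi-document stream, assuming gopkg.in/yaml.v3 and an illustrative local file name (this is not minikube's own config code):

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Illustrative path; in the log the stream is written to /var/tmp/minikube/kubeadm.yaml.new.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Decode every document in the stream into a generic map and report its kind.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF once the last document has been read
		}
		fmt.Printf("apiVersion=%v kind=%v\n", doc["apiVersion"], doc["kind"])
	}
}
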
	
	I0920 20:13:36.743730  946192 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0920 20:13:36.753272  946192 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 20:13:36.753346  946192 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 20:13:36.762741  946192 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0920 20:13:36.785205  946192 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 20:13:36.804625  946192 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0920 20:13:36.825083  946192 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0920 20:13:36.828764  946192 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 20:13:36.845079  946192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:13:36.934116  946192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 20:13:36.949206  946192 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703 for IP: 192.168.76.2
	I0920 20:13:36.949282  946192 certs.go:194] generating shared ca certs ...
	I0920 20:13:36.949313  946192 certs.go:226] acquiring lock for ca certs: {Name:mk05671cd2fa7cea0f374261a29f5dc2649893f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:13:36.949519  946192 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-734403/.minikube/ca.key
	I0920 20:13:36.949629  946192 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-734403/.minikube/proxy-client-ca.key
	I0920 20:13:36.949669  946192 certs.go:256] generating profile certs ...
	I0920 20:13:36.949831  946192 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/client.key
	I0920 20:13:36.949940  946192 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/apiserver.key.6cd8aad4
	I0920 20:13:36.950022  946192 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/proxy-client.key
	I0920 20:13:36.950189  946192 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/739787.pem (1338 bytes)
	W0920 20:13:36.950254  946192 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-734403/.minikube/certs/739787_empty.pem, impossibly tiny 0 bytes
	I0920 20:13:36.950280  946192 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 20:13:36.950365  946192 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca.pem (1078 bytes)
	I0920 20:13:36.950441  946192 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/cert.pem (1123 bytes)
	I0920 20:13:36.950502  946192 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/key.pem (1679 bytes)
	I0920 20:13:36.950597  946192 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-734403/.minikube/files/etc/ssl/certs/7397872.pem (1708 bytes)
	I0920 20:13:36.951584  946192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 20:13:36.983559  946192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 20:13:37.015310  946192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 20:13:37.072676  946192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 20:13:37.101053  946192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0920 20:13:37.133943  946192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 20:13:37.162002  946192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 20:13:37.189651  946192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0920 20:13:37.216176  946192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/certs/739787.pem --> /usr/share/ca-certificates/739787.pem (1338 bytes)
	I0920 20:13:37.243312  946192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/files/etc/ssl/certs/7397872.pem --> /usr/share/ca-certificates/7397872.pem (1708 bytes)
	I0920 20:13:37.270091  946192 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 20:13:37.294881  946192 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 20:13:37.315743  946192 ssh_runner.go:195] Run: openssl version
	I0920 20:13:37.321675  946192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7397872.pem && ln -fs /usr/share/ca-certificates/7397872.pem /etc/ssl/certs/7397872.pem"
	I0920 20:13:37.331787  946192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7397872.pem
	I0920 20:13:37.335735  946192 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 19:33 /usr/share/ca-certificates/7397872.pem
	I0920 20:13:37.335802  946192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7397872.pem
	I0920 20:13:37.343048  946192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7397872.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 20:13:37.352475  946192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 20:13:37.362202  946192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:13:37.366749  946192 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 19:23 /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:13:37.366829  946192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:13:37.374124  946192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 20:13:37.383448  946192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/739787.pem && ln -fs /usr/share/ca-certificates/739787.pem /etc/ssl/certs/739787.pem"
	I0920 20:13:37.393122  946192 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/739787.pem
	I0920 20:13:37.397241  946192 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 19:33 /usr/share/ca-certificates/739787.pem
	I0920 20:13:37.397332  946192 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/739787.pem
	I0920 20:13:37.404640  946192 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/739787.pem /etc/ssl/certs/51391683.0"
	I0920 20:13:37.413915  946192 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 20:13:37.420778  946192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0920 20:13:37.428643  946192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0920 20:13:37.435811  946192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0920 20:13:37.442960  946192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0920 20:13:37.450103  946192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0920 20:13:37.457218  946192 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
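
The `openssl x509 -noout -in <cert> -checkend 86400` runs just above ask whether each control-plane certificate stays valid for at least another 86400 seconds (24 hours). A minimal Go equivalent of one such check, standard library only, using one of the paths checked above (a sketch, not the test's actual code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Same question as `-checkend 86400`: does the certificate outlive now + 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}
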
	I0920 20:13:37.464400  946192 kubeadm.go:392] StartCluster: {Name:old-k8s-version-060703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-060703 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:13:37.464516  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0920 20:13:37.464575  946192 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 20:13:37.515551  946192 cri.go:89] found id: "fd0a5090e2660a82b741d74d8eb38070de61c287064c5793d8e1f1a5730379b5"
	I0920 20:13:37.515578  946192 cri.go:89] found id: "dd5d1d400016208884d0eaea14b265bc3034712700233829f2df3a399548ee6c"
	I0920 20:13:37.515584  946192 cri.go:89] found id: "f5067eeb74f8741f717dfa76d1cacb94c04f025fbf41f7edfbf5a036fcd57509"
	I0920 20:13:37.515589  946192 cri.go:89] found id: "7f96065f6406f118cc78e2beb4ce41b67935ceefa21d878ac71c7f147289d9ca"
	I0920 20:13:37.515592  946192 cri.go:89] found id: "115b7d1c4f8b9cdba309973fd5aec53ec56a81f42cdf7e9b8a9690ba26529a7c"
	I0920 20:13:37.515606  946192 cri.go:89] found id: "9676eacbfe78c48f7c25c60b7943745f609bdc8b67c2d5f0f8b2b4f1a4e432c5"
	I0920 20:13:37.515609  946192 cri.go:89] found id: "56e62195e5537fcd4d8f51f18dcf07839a5cb4f372d4ba26811304c694a6346b"
	I0920 20:13:37.515613  946192 cri.go:89] found id: "56bf62a945c8839e0c16837894b5b12d033770cf63da3df06224c80bc53a0950"
	I0920 20:13:37.515616  946192 cri.go:89] found id: ""
	I0920 20:13:37.515672  946192 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0920 20:13:37.528530  946192 cri.go:116] JSON = null
	W0920 20:13:37.528579  946192 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0920 20:13:37.528662  946192 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 20:13:37.537800  946192 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0920 20:13:37.537821  946192 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0920 20:13:37.537942  946192 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0920 20:13:37.547371  946192 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0920 20:13:37.548038  946192 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-060703" does not appear in /home/jenkins/minikube-integration/19678-734403/kubeconfig
	I0920 20:13:37.548319  946192 kubeconfig.go:62] /home/jenkins/minikube-integration/19678-734403/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-060703" cluster setting kubeconfig missing "old-k8s-version-060703" context setting]
	I0920 20:13:37.548803  946192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/kubeconfig: {Name:mk2c4e41774b0706b15fe3f774308577d8981408 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:13:37.550258  946192 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0920 20:13:37.559074  946192 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0920 20:13:37.559110  946192 kubeadm.go:597] duration metric: took 21.282065ms to restartPrimaryControlPlane
	I0920 20:13:37.559120  946192 kubeadm.go:394] duration metric: took 94.729536ms to StartCluster
	I0920 20:13:37.559155  946192 settings.go:142] acquiring lock: {Name:mk0c46dfbbc36539bac54a4b44b23e5293c710e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:13:37.559230  946192 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19678-734403/kubeconfig
	I0920 20:13:37.560160  946192 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/kubeconfig: {Name:mk2c4e41774b0706b15fe3f774308577d8981408 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:13:37.560376  946192 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0920 20:13:37.560755  946192 config.go:182] Loaded profile config "old-k8s-version-060703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0920 20:13:37.560741  946192 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0920 20:13:37.560837  946192 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-060703"
	I0920 20:13:37.560859  946192 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-060703"
	W0920 20:13:37.560866  946192 addons.go:243] addon storage-provisioner should already be in state true
	I0920 20:13:37.560891  946192 host.go:66] Checking if "old-k8s-version-060703" exists ...
	I0920 20:13:37.561274  946192 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-060703"
	I0920 20:13:37.561309  946192 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-060703"
	I0920 20:13:37.561653  946192 cli_runner.go:164] Run: docker container inspect old-k8s-version-060703 --format={{.State.Status}}
	I0920 20:13:37.561689  946192 cli_runner.go:164] Run: docker container inspect old-k8s-version-060703 --format={{.State.Status}}
	I0920 20:13:37.562802  946192 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-060703"
	I0920 20:13:37.562832  946192 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-060703"
	W0920 20:13:37.562840  946192 addons.go:243] addon metrics-server should already be in state true
	I0920 20:13:37.562866  946192 host.go:66] Checking if "old-k8s-version-060703" exists ...
	I0920 20:13:37.563562  946192 cli_runner.go:164] Run: docker container inspect old-k8s-version-060703 --format={{.State.Status}}
	I0920 20:13:37.567675  946192 out.go:177] * Verifying Kubernetes components...
	I0920 20:13:37.567885  946192 addons.go:69] Setting dashboard=true in profile "old-k8s-version-060703"
	I0920 20:13:37.567908  946192 addons.go:234] Setting addon dashboard=true in "old-k8s-version-060703"
	W0920 20:13:37.567916  946192 addons.go:243] addon dashboard should already be in state true
	I0920 20:13:37.567948  946192 host.go:66] Checking if "old-k8s-version-060703" exists ...
	I0920 20:13:37.568565  946192 cli_runner.go:164] Run: docker container inspect old-k8s-version-060703 --format={{.State.Status}}
	I0920 20:13:37.570655  946192 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:13:37.603934  946192 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-060703"
	W0920 20:13:37.603975  946192 addons.go:243] addon default-storageclass should already be in state true
	I0920 20:13:37.604004  946192 host.go:66] Checking if "old-k8s-version-060703" exists ...
	I0920 20:13:37.604439  946192 cli_runner.go:164] Run: docker container inspect old-k8s-version-060703 --format={{.State.Status}}
	I0920 20:13:37.612379  946192 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0920 20:13:37.618370  946192 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 20:13:37.618396  946192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0920 20:13:37.618478  946192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-060703
	I0920 20:13:37.638364  946192 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0920 20:13:37.642385  946192 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0920 20:13:37.642421  946192 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0920 20:13:37.642507  946192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-060703
	I0920 20:13:37.649906  946192 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0920 20:13:37.649935  946192 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0920 20:13:37.650008  946192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-060703
	I0920 20:13:37.653181  946192 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0920 20:13:37.655110  946192 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0920 20:13:37.656843  946192 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0920 20:13:37.656862  946192 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0920 20:13:37.656930  946192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-060703
	I0920 20:13:37.671852  946192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/old-k8s-version-060703/id_rsa Username:docker}
	I0920 20:13:37.710718  946192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/old-k8s-version-060703/id_rsa Username:docker}
	I0920 20:13:37.712895  946192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/old-k8s-version-060703/id_rsa Username:docker}
	I0920 20:13:37.723432  946192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/old-k8s-version-060703/id_rsa Username:docker}
	I0920 20:13:37.765102  946192 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 20:13:37.798373  946192 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-060703" to be "Ready" ...
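
From here node_ready.go polls the node object until its Ready condition turns True or the 6m0s budget runs out; the "connection refused" errors that follow are the same apiserver endpoint rejecting those polls while it comes back up. A rough client-go sketch of such a wait loop (the kubeconfig path and 3-second poll interval are assumptions, and this only illustrates the shape of the check, not minikube's implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path; the test points at the profile's own kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-060703", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(3 * time.Second) // assumed poll interval
	}
	fmt.Println("timed out waiting for node old-k8s-version-060703 to be Ready")
}
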
	I0920 20:13:37.836226  946192 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0920 20:13:37.836252  946192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0920 20:13:37.859687  946192 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0920 20:13:37.859713  946192 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0920 20:13:37.898481  946192 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 20:13:37.898508  946192 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0920 20:13:37.902427  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0920 20:13:37.906690  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 20:13:37.913188  946192 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0920 20:13:37.913229  946192 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0920 20:13:37.949220  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 20:13:37.961083  946192 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0920 20:13:37.961111  946192 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0920 20:13:37.990934  946192 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0920 20:13:37.990960  946192 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0920 20:13:38.015803  946192 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0920 20:13:38.015830  946192 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0920 20:13:38.049135  946192 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0920 20:13:38.049161  946192 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0920 20:13:38.081488  946192 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0920 20:13:38.081521  946192 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0920 20:13:38.106171  946192 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0920 20:13:38.106200  946192 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0920 20:13:38.134044  946192 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0920 20:13:38.134070  946192 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0920 20:13:38.161442  946192 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0920 20:13:38.161469  946192 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	W0920 20:13:38.177844  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:38.177884  946192 retry.go:31] will retry after 304.861217ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 20:13:38.189861  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:38.189893  946192 retry.go:31] will retry after 144.28986ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:38.191927  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0920 20:13:38.209427  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:38.209464  946192 retry.go:31] will retry after 262.547987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 20:13:38.270607  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:38.270703  946192 retry.go:31] will retry after 256.110791ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:38.335321  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0920 20:13:38.411317  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:38.411353  946192 retry.go:31] will retry after 334.800419ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:38.472534  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 20:13:38.483853  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0920 20:13:38.527113  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0920 20:13:38.602059  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:38.602096  946192 retry.go:31] will retry after 431.675957ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 20:13:38.619224  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:38.619259  946192 retry.go:31] will retry after 286.788685ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 20:13:38.650703  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:38.650740  946192 retry.go:31] will retry after 393.281935ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:38.746932  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0920 20:13:38.819747  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:38.819784  946192 retry.go:31] will retry after 427.937952ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:38.906985  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0920 20:13:38.986742  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:38.986776  946192 retry.go:31] will retry after 404.113048ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:39.034978  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 20:13:39.044491  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0920 20:13:39.136253  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:39.136292  946192 retry.go:31] will retry after 833.552652ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 20:13:39.163008  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:39.163045  946192 retry.go:31] will retry after 319.07265ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:39.248234  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0920 20:13:39.326577  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:39.326613  946192 retry.go:31] will retry after 1.161799781s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:39.391736  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0920 20:13:39.480171  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:39.480207  946192 retry.go:31] will retry after 579.211474ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:39.482495  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0920 20:13:39.568162  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:39.568247  946192 retry.go:31] will retry after 518.611931ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:39.798946  946192 node_ready.go:53] error getting node "old-k8s-version-060703": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-060703": dial tcp 192.168.76.2:8443: connect: connection refused
	I0920 20:13:39.970691  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0920 20:13:40.056667  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:40.056751  946192 retry.go:31] will retry after 452.486418ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:40.059794  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0920 20:13:40.087427  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0920 20:13:40.149033  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:40.149078  946192 retry.go:31] will retry after 1.053057411s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 20:13:40.185926  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:40.185962  946192 retry.go:31] will retry after 1.553653722s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:40.488878  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 20:13:40.510294  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0920 20:13:40.602227  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:40.602266  946192 retry.go:31] will retry after 997.840761ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0920 20:13:40.623194  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:40.623230  946192 retry.go:31] will retry after 893.437255ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:41.202373  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0920 20:13:41.280223  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:41.280261  946192 retry.go:31] will retry after 1.071082151s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:41.517332  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0920 20:13:41.589124  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:41.589161  946192 retry.go:31] will retry after 2.809799635s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:41.601349  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0920 20:13:41.673760  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:41.673795  946192 retry.go:31] will retry after 2.202559828s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:41.740057  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0920 20:13:41.799787  946192 node_ready.go:53] error getting node "old-k8s-version-060703": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-060703": dial tcp 192.168.76.2:8443: connect: connection refused
	W0920 20:13:41.809901  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:41.809932  946192 retry.go:31] will retry after 2.204710638s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:42.352087  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0920 20:13:42.434254  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:42.434288  946192 retry.go:31] will retry after 4.182690488s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:43.877389  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0920 20:13:43.952176  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:43.952209  946192 retry.go:31] will retry after 2.325859797s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:44.015442  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0920 20:13:44.095472  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:44.095504  946192 retry.go:31] will retry after 3.725656698s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:44.299046  946192 node_ready.go:53] error getting node "old-k8s-version-060703": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-060703": dial tcp 192.168.76.2:8443: connect: connection refused
	I0920 20:13:44.399154  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0920 20:13:44.479899  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:44.479933  946192 retry.go:31] will retry after 3.510268699s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:46.278534  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 20:13:46.299790  946192 node_ready.go:53] error getting node "old-k8s-version-060703": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-060703": dial tcp 192.168.76.2:8443: connect: connection refused
	I0920 20:13:46.618076  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0920 20:13:46.631729  946192 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:46.631758  946192 retry.go:31] will retry after 5.077499594s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0920 20:13:47.822197  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0920 20:13:47.990736  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0920 20:13:51.710197  946192 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0920 20:13:53.946395  946192 node_ready.go:49] node "old-k8s-version-060703" has status "Ready":"True"
	I0920 20:13:53.946419  946192 node_ready.go:38] duration metric: took 16.148006975s for node "old-k8s-version-060703" to be "Ready" ...
	I0920 20:13:53.946431  946192 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 20:13:54.110886  946192 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-5cx2l" in "kube-system" namespace to be "Ready" ...
	I0920 20:13:54.237918  946192 pod_ready.go:93] pod "coredns-74ff55c5b-5cx2l" in "kube-system" namespace has status "Ready":"True"
	I0920 20:13:54.237997  946192 pod_ready.go:82] duration metric: took 127.026234ms for pod "coredns-74ff55c5b-5cx2l" in "kube-system" namespace to be "Ready" ...
	I0920 20:13:54.238025  946192 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-060703" in "kube-system" namespace to be "Ready" ...
	I0920 20:13:54.660887  946192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.042773058s)
	I0920 20:13:55.051135  946192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.228893363s)
	I0920 20:13:55.051435  946192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.060668381s)
	I0920 20:13:55.051554  946192 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-060703"
	I0920 20:13:55.051517  946192 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.341286238s)
	I0920 20:13:55.053214  946192 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-060703 addons enable metrics-server
	
	I0920 20:13:55.055034  946192 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I0920 20:13:55.057114  946192 addons.go:510] duration metric: took 17.496374499s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I0920 20:13:56.244203  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:13:58.244515  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:00.284172  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:02.745593  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:05.245904  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:07.744150  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:10.244289  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:12.245196  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:14.745444  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:16.757340  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:19.249120  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:21.748978  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:24.244958  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:26.251530  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:28.744169  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:30.745624  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:32.745931  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:35.246224  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:37.745825  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:40.249541  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:42.745068  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:45.248839  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:47.744372  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:50.245401  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:52.744653  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:55.244622  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:57.744243  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:14:59.745288  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:02.245272  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:04.752411  946192 pod_ready.go:103] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:06.744210  946192 pod_ready.go:93] pod "etcd-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"True"
	I0920 20:15:06.744235  946192 pod_ready.go:82] duration metric: took 1m12.506189146s for pod "etcd-old-k8s-version-060703" in "kube-system" namespace to be "Ready" ...
	I0920 20:15:06.744251  946192 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-060703" in "kube-system" namespace to be "Ready" ...
	I0920 20:15:06.749908  946192 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"True"
	I0920 20:15:06.749934  946192 pod_ready.go:82] duration metric: took 5.675831ms for pod "kube-apiserver-old-k8s-version-060703" in "kube-system" namespace to be "Ready" ...
	I0920 20:15:06.749951  946192 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-060703" in "kube-system" namespace to be "Ready" ...
	I0920 20:15:08.757243  946192 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:10.757349  946192 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"True"
	I0920 20:15:10.757375  946192 pod_ready.go:82] duration metric: took 4.007416127s for pod "kube-controller-manager-old-k8s-version-060703" in "kube-system" namespace to be "Ready" ...
	I0920 20:15:10.757390  946192 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-vnktx" in "kube-system" namespace to be "Ready" ...
	I0920 20:15:10.763010  946192 pod_ready.go:93] pod "kube-proxy-vnktx" in "kube-system" namespace has status "Ready":"True"
	I0920 20:15:10.763079  946192 pod_ready.go:82] duration metric: took 5.680024ms for pod "kube-proxy-vnktx" in "kube-system" namespace to be "Ready" ...
	I0920 20:15:10.763099  946192 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-060703" in "kube-system" namespace to be "Ready" ...
	I0920 20:15:12.769407  946192 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:15.270234  946192 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:17.769932  946192 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-060703" in "kube-system" namespace has status "Ready":"True"
	I0920 20:15:17.769959  946192 pod_ready.go:82] duration metric: took 7.006850477s for pod "kube-scheduler-old-k8s-version-060703" in "kube-system" namespace to be "Ready" ...
	I0920 20:15:17.769971  946192 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace to be "Ready" ...
	I0920 20:15:19.786042  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:22.276350  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:24.277165  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:26.277876  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:28.775910  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:31.276452  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:33.276999  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:35.277110  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:37.277834  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:39.777103  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:41.779402  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:44.275440  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:46.276398  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:48.776977  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:50.779430  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:53.277006  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:55.780329  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:15:58.276215  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:00.354636  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:02.777323  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:05.277842  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:07.778442  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:09.778556  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:12.276797  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:14.776991  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:16.781669  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:19.276554  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:21.276893  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:23.277029  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:25.776603  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:27.777990  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:29.782121  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:32.276495  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:34.776416  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:36.777016  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:39.275847  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:41.277349  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:43.278422  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:45.392131  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:47.777933  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:49.778287  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:52.276772  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:54.776797  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:57.277267  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:16:59.778913  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:01.785158  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:04.306662  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:06.777222  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:09.279616  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:11.281307  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:13.778182  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:15.778989  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:18.276826  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:20.776552  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:22.776869  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:24.777515  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:27.276728  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:29.779883  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:32.277114  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:34.775951  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:36.777276  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:39.276373  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:41.276761  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:43.276918  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:45.293018  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:47.777074  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:49.780967  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:51.784160  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:54.282719  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:56.777230  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:17:59.275833  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:01.277322  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:03.777320  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:06.276624  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:08.776585  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:11.277343  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:13.777840  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:15.779021  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:18.276217  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:20.276871  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:22.776879  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:25.276656  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:27.277095  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:29.778034  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:31.778546  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:34.276936  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:36.776625  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:39.276418  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:41.276750  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:43.282601  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:45.288553  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:47.778989  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:50.275880  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:52.276842  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:54.282534  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:56.776375  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:18:59.277824  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:19:01.778574  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:19:03.779154  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:19:06.278029  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:19:08.776740  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:19:11.276880  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:19:13.277362  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:19:15.277577  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:19:17.780101  946192 pod_ready.go:103] pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace has status "Ready":"False"
	I0920 20:19:17.780139  946192 pod_ready.go:82] duration metric: took 4m0.010158742s for pod "metrics-server-9975d5f86-rrgz6" in "kube-system" namespace to be "Ready" ...
	E0920 20:19:17.780151  946192 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0920 20:19:17.780165  946192 pod_ready.go:39] duration metric: took 5m23.833718241s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0920 20:19:17.780191  946192 api_server.go:52] waiting for apiserver process to appear ...
	I0920 20:19:17.780232  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0920 20:19:17.780310  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 20:19:17.883487  946192 cri.go:89] found id: "9830062e4a05b8c5621d052c0ac7415aafaba5aa2870be98294da31078a6f9cf"
	I0920 20:19:17.883511  946192 cri.go:89] found id: "56e62195e5537fcd4d8f51f18dcf07839a5cb4f372d4ba26811304c694a6346b"
	I0920 20:19:17.883517  946192 cri.go:89] found id: ""
	I0920 20:19:17.883524  946192 logs.go:276] 2 containers: [9830062e4a05b8c5621d052c0ac7415aafaba5aa2870be98294da31078a6f9cf 56e62195e5537fcd4d8f51f18dcf07839a5cb4f372d4ba26811304c694a6346b]
	I0920 20:19:17.883581  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:17.899469  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:17.909617  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0920 20:19:17.909693  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 20:19:17.974665  946192 cri.go:89] found id: "a71442909c285554343e6076272da58df7a7993274bc073cc91d5d882d30b487"
	I0920 20:19:17.974687  946192 cri.go:89] found id: "56bf62a945c8839e0c16837894b5b12d033770cf63da3df06224c80bc53a0950"
	I0920 20:19:17.974692  946192 cri.go:89] found id: ""
	I0920 20:19:17.974699  946192 logs.go:276] 2 containers: [a71442909c285554343e6076272da58df7a7993274bc073cc91d5d882d30b487 56bf62a945c8839e0c16837894b5b12d033770cf63da3df06224c80bc53a0950]
	I0920 20:19:17.974758  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:17.982503  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:17.992643  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0920 20:19:17.992725  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 20:19:18.070773  946192 cri.go:89] found id: "5e6abd9a0f1925068f8bb29c0c8890d917e637784ba74e694a9c4956a1e767b5"
	I0920 20:19:18.070795  946192 cri.go:89] found id: "fd0a5090e2660a82b741d74d8eb38070de61c287064c5793d8e1f1a5730379b5"
	I0920 20:19:18.070800  946192 cri.go:89] found id: ""
	I0920 20:19:18.070807  946192 logs.go:276] 2 containers: [5e6abd9a0f1925068f8bb29c0c8890d917e637784ba74e694a9c4956a1e767b5 fd0a5090e2660a82b741d74d8eb38070de61c287064c5793d8e1f1a5730379b5]
	I0920 20:19:18.070872  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:18.078254  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:18.083424  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0920 20:19:18.083502  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 20:19:18.156230  946192 cri.go:89] found id: "fd2084c4d424a4e4be4ff1e6ec5a2989267fb40dd10997823ec9709aafae4bcd"
	I0920 20:19:18.156308  946192 cri.go:89] found id: "115b7d1c4f8b9cdba309973fd5aec53ec56a81f42cdf7e9b8a9690ba26529a7c"
	I0920 20:19:18.156328  946192 cri.go:89] found id: ""
	I0920 20:19:18.156350  946192 logs.go:276] 2 containers: [fd2084c4d424a4e4be4ff1e6ec5a2989267fb40dd10997823ec9709aafae4bcd 115b7d1c4f8b9cdba309973fd5aec53ec56a81f42cdf7e9b8a9690ba26529a7c]
	I0920 20:19:18.156442  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:18.160505  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:18.164502  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0920 20:19:18.164578  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 20:19:18.229223  946192 cri.go:89] found id: "9f93cfb56d5061f27e2716d238b65361ae444800034f643bfaf255281067ab06"
	I0920 20:19:18.229244  946192 cri.go:89] found id: "7f96065f6406f118cc78e2beb4ce41b67935ceefa21d878ac71c7f147289d9ca"
	I0920 20:19:18.229250  946192 cri.go:89] found id: ""
	I0920 20:19:18.229257  946192 logs.go:276] 2 containers: [9f93cfb56d5061f27e2716d238b65361ae444800034f643bfaf255281067ab06 7f96065f6406f118cc78e2beb4ce41b67935ceefa21d878ac71c7f147289d9ca]
	I0920 20:19:18.229344  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:18.234474  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:18.239475  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 20:19:18.239560  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 20:19:18.295962  946192 cri.go:89] found id: "2e3b79d7c928117e9302779631aaeb7c221344a58cc9252788340585a2c2904b"
	I0920 20:19:18.295982  946192 cri.go:89] found id: "9676eacbfe78c48f7c25c60b7943745f609bdc8b67c2d5f0f8b2b4f1a4e432c5"
	I0920 20:19:18.295987  946192 cri.go:89] found id: ""
	I0920 20:19:18.295994  946192 logs.go:276] 2 containers: [2e3b79d7c928117e9302779631aaeb7c221344a58cc9252788340585a2c2904b 9676eacbfe78c48f7c25c60b7943745f609bdc8b67c2d5f0f8b2b4f1a4e432c5]
	I0920 20:19:18.296058  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:18.300945  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:18.305616  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0920 20:19:18.305719  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 20:19:18.353963  946192 cri.go:89] found id: "4d7b9fa90b0c82f4ee82283fad6ca39bd72b6b6e3d95469d6e91454db15ea1c3"
	I0920 20:19:18.353984  946192 cri.go:89] found id: "dd5d1d400016208884d0eaea14b265bc3034712700233829f2df3a399548ee6c"
	I0920 20:19:18.353989  946192 cri.go:89] found id: ""
	I0920 20:19:18.353996  946192 logs.go:276] 2 containers: [4d7b9fa90b0c82f4ee82283fad6ca39bd72b6b6e3d95469d6e91454db15ea1c3 dd5d1d400016208884d0eaea14b265bc3034712700233829f2df3a399548ee6c]
	I0920 20:19:18.354052  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:18.358241  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:18.361808  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 20:19:18.361881  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 20:19:18.416200  946192 cri.go:89] found id: "cb307e7144c5a3b083f3c34331c270026a1c7b317a40bed72b85f099a061f5b5"
	I0920 20:19:18.416222  946192 cri.go:89] found id: ""
	I0920 20:19:18.416231  946192 logs.go:276] 1 containers: [cb307e7144c5a3b083f3c34331c270026a1c7b317a40bed72b85f099a061f5b5]
	I0920 20:19:18.416307  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:18.422431  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0920 20:19:18.422508  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 20:19:18.484776  946192 cri.go:89] found id: "9aefdeb7d5aac5147087d3269954a3aa7e5a25d7014644dc09ef1f02a44e773e"
	I0920 20:19:18.484802  946192 cri.go:89] found id: "81118d4396158739b56e864be3df707098871bebabeeed0ea7ac25fa65540524"
	I0920 20:19:18.484808  946192 cri.go:89] found id: ""
	I0920 20:19:18.484816  946192 logs.go:276] 2 containers: [9aefdeb7d5aac5147087d3269954a3aa7e5a25d7014644dc09ef1f02a44e773e 81118d4396158739b56e864be3df707098871bebabeeed0ea7ac25fa65540524]
	I0920 20:19:18.484884  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:18.489299  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:18.493490  946192 logs.go:123] Gathering logs for etcd [a71442909c285554343e6076272da58df7a7993274bc073cc91d5d882d30b487] ...
	I0920 20:19:18.493523  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a71442909c285554343e6076272da58df7a7993274bc073cc91d5d882d30b487"
	I0920 20:19:18.547694  946192 logs.go:123] Gathering logs for coredns [fd0a5090e2660a82b741d74d8eb38070de61c287064c5793d8e1f1a5730379b5] ...
	I0920 20:19:18.547768  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0a5090e2660a82b741d74d8eb38070de61c287064c5793d8e1f1a5730379b5"
	I0920 20:19:18.599052  946192 logs.go:123] Gathering logs for kube-controller-manager [9676eacbfe78c48f7c25c60b7943745f609bdc8b67c2d5f0f8b2b4f1a4e432c5] ...
	I0920 20:19:18.599084  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9676eacbfe78c48f7c25c60b7943745f609bdc8b67c2d5f0f8b2b4f1a4e432c5"
	I0920 20:19:18.687420  946192 logs.go:123] Gathering logs for containerd ...
	I0920 20:19:18.687503  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0920 20:19:18.762221  946192 logs.go:123] Gathering logs for kindnet [dd5d1d400016208884d0eaea14b265bc3034712700233829f2df3a399548ee6c] ...
	I0920 20:19:18.762319  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd5d1d400016208884d0eaea14b265bc3034712700233829f2df3a399548ee6c"
	I0920 20:19:18.813764  946192 logs.go:123] Gathering logs for kubelet ...
	I0920 20:19:18.813837  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 20:19:18.873123  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.782974     659 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:18.873387  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.783820     659 reflector.go:138] object-"kube-system"/"coredns-token-skwkx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-skwkx" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:18.873621  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.783953     659 reflector.go:138] object-"kube-system"/"kindnet-token-wtddc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-wtddc" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:18.878363  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.840781     659 reflector.go:138] object-"kube-system"/"kube-proxy-token-zwdqw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-zwdqw" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:18.878694  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.853652     659 reflector.go:138] object-"kube-system"/"metrics-server-token-69pvz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-69pvz" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:18.879023  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.853659     659 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:18.879269  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.853731     659 reflector.go:138] object-"default"/"default-token-vs6wc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-vs6wc" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:18.879545  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.853745     659 reflector.go:138] object-"kube-system"/"storage-provisioner-token-xwg86": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-xwg86" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:18.889186  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:56 old-k8s-version-060703 kubelet[659]: E0920 20:13:56.349137     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 20:19:18.890627  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:56 old-k8s-version-060703 kubelet[659]: E0920 20:13:56.662052     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:18.893817  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:10 old-k8s-version-060703 kubelet[659]: E0920 20:14:10.543030     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 20:19:18.894391  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:11 old-k8s-version-060703 kubelet[659]: E0920 20:14:11.531667     659 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-2llkj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-2llkj" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:18.898150  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:19 old-k8s-version-060703 kubelet[659]: E0920 20:14:19.798449     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.899209  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:20 old-k8s-version-060703 kubelet[659]: E0920 20:14:20.798130     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.899425  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:21 old-k8s-version-060703 kubelet[659]: E0920 20:14:21.540669     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:18.900195  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:26 old-k8s-version-060703 kubelet[659]: E0920 20:14:26.449420     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.900690  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:28 old-k8s-version-060703 kubelet[659]: E0920 20:14:28.843061     659 pod_workers.go:191] Error syncing pod 5d9634f3-9aae-4540-8972-26fbe5a73fc5 ("storage-provisioner_kube-system(5d9634f3-9aae-4540-8972-26fbe5a73fc5)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5d9634f3-9aae-4540-8972-26fbe5a73fc5)"
	W0920 20:19:18.903586  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:32 old-k8s-version-060703 kubelet[659]: E0920 20:14:32.543542     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 20:19:18.904843  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:39 old-k8s-version-060703 kubelet[659]: E0920 20:14:39.886487     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.905530  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:46 old-k8s-version-060703 kubelet[659]: E0920 20:14:46.448497     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.905764  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:46 old-k8s-version-060703 kubelet[659]: E0920 20:14:46.535215     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:18.906491  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:01 old-k8s-version-060703 kubelet[659]: E0920 20:15:01.946516     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.906699  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:02 old-k8s-version-060703 kubelet[659]: E0920 20:15:02.538454     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:18.907034  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:06 old-k8s-version-060703 kubelet[659]: E0920 20:15:06.448416     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.909573  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:13 old-k8s-version-060703 kubelet[659]: E0920 20:15:13.548809     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 20:19:18.909940  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:17 old-k8s-version-060703 kubelet[659]: E0920 20:15:17.538654     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.910150  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:24 old-k8s-version-060703 kubelet[659]: E0920 20:15:24.534849     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:18.910533  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:29 old-k8s-version-060703 kubelet[659]: E0920 20:15:29.535186     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.910740  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:37 old-k8s-version-060703 kubelet[659]: E0920 20:15:37.537356     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:18.911366  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:44 old-k8s-version-060703 kubelet[659]: E0920 20:15:44.079939     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.911894  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:46 old-k8s-version-060703 kubelet[659]: E0920 20:15:46.448892     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.912100  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:51 old-k8s-version-060703 kubelet[659]: E0920 20:15:51.534849     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:18.912586  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:58 old-k8s-version-060703 kubelet[659]: E0920 20:15:58.535135     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.912789  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:03 old-k8s-version-060703 kubelet[659]: E0920 20:16:03.534859     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:18.913238  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:12 old-k8s-version-060703 kubelet[659]: E0920 20:16:12.534379     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.913453  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:18 old-k8s-version-060703 kubelet[659]: E0920 20:16:18.534761     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:18.913897  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:25 old-k8s-version-060703 kubelet[659]: E0920 20:16:25.535349     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.914244  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:33 old-k8s-version-060703 kubelet[659]: E0920 20:16:33.534743     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:18.914620  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:37 old-k8s-version-060703 kubelet[659]: E0920 20:16:37.534972     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.917530  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:45 old-k8s-version-060703 kubelet[659]: E0920 20:16:45.559442     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 20:19:18.917889  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:51 old-k8s-version-060703 kubelet[659]: E0920 20:16:51.535270     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.918088  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:00 old-k8s-version-060703 kubelet[659]: E0920 20:17:00.536632     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:18.919318  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:05 old-k8s-version-060703 kubelet[659]: E0920 20:17:05.335746     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.919671  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:06 old-k8s-version-060703 kubelet[659]: E0920 20:17:06.460448     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.919867  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:11 old-k8s-version-060703 kubelet[659]: E0920 20:17:11.534880     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:18.920209  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:20 old-k8s-version-060703 kubelet[659]: E0920 20:17:20.534257     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.920401  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:25 old-k8s-version-060703 kubelet[659]: E0920 20:17:25.535347     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:18.920745  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:35 old-k8s-version-060703 kubelet[659]: E0920 20:17:35.535192     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.920947  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:40 old-k8s-version-060703 kubelet[659]: E0920 20:17:40.535020     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:18.921838  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:50 old-k8s-version-060703 kubelet[659]: E0920 20:17:50.534292     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.922051  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:52 old-k8s-version-060703 kubelet[659]: E0920 20:17:52.534629     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:18.922447  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:04 old-k8s-version-060703 kubelet[659]: E0920 20:18:04.534361     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.922649  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:07 old-k8s-version-060703 kubelet[659]: E0920 20:18:07.537831     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:18.922994  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:15 old-k8s-version-060703 kubelet[659]: E0920 20:18:15.534869     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.923194  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:18 old-k8s-version-060703 kubelet[659]: E0920 20:18:18.534940     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:18.923539  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:28 old-k8s-version-060703 kubelet[659]: E0920 20:18:28.534266     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.923752  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:33 old-k8s-version-060703 kubelet[659]: E0920 20:18:33.534834     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:18.924114  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:42 old-k8s-version-060703 kubelet[659]: E0920 20:18:42.534713     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.924307  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:46 old-k8s-version-060703 kubelet[659]: E0920 20:18:46.535307     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:18.924659  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:54 old-k8s-version-060703 kubelet[659]: E0920 20:18:54.534370     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.924849  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:59 old-k8s-version-060703 kubelet[659]: E0920 20:18:59.534951     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:18.925214  946192 logs.go:138] Found kubelet problem: Sep 20 20:19:08 old-k8s-version-060703 kubelet[659]: E0920 20:19:08.534284     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:18.925428  946192 logs.go:138] Found kubelet problem: Sep 20 20:19:10 old-k8s-version-060703 kubelet[659]: E0920 20:19:10.534698     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0920 20:19:18.925441  946192 logs.go:123] Gathering logs for dmesg ...
	I0920 20:19:18.925455  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 20:19:18.944891  946192 logs.go:123] Gathering logs for describe nodes ...
	I0920 20:19:18.944923  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 20:19:19.201972  946192 logs.go:123] Gathering logs for kube-apiserver [56e62195e5537fcd4d8f51f18dcf07839a5cb4f372d4ba26811304c694a6346b] ...
	I0920 20:19:19.202012  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56e62195e5537fcd4d8f51f18dcf07839a5cb4f372d4ba26811304c694a6346b"
	I0920 20:19:19.302535  946192 logs.go:123] Gathering logs for kube-scheduler [fd2084c4d424a4e4be4ff1e6ec5a2989267fb40dd10997823ec9709aafae4bcd] ...
	I0920 20:19:19.302620  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd2084c4d424a4e4be4ff1e6ec5a2989267fb40dd10997823ec9709aafae4bcd"
	I0920 20:19:19.357429  946192 logs.go:123] Gathering logs for kube-scheduler [115b7d1c4f8b9cdba309973fd5aec53ec56a81f42cdf7e9b8a9690ba26529a7c] ...
	I0920 20:19:19.357463  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115b7d1c4f8b9cdba309973fd5aec53ec56a81f42cdf7e9b8a9690ba26529a7c"
	I0920 20:19:19.440074  946192 logs.go:123] Gathering logs for kube-proxy [7f96065f6406f118cc78e2beb4ce41b67935ceefa21d878ac71c7f147289d9ca] ...
	I0920 20:19:19.440114  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f96065f6406f118cc78e2beb4ce41b67935ceefa21d878ac71c7f147289d9ca"
	I0920 20:19:19.506646  946192 logs.go:123] Gathering logs for storage-provisioner [9aefdeb7d5aac5147087d3269954a3aa7e5a25d7014644dc09ef1f02a44e773e] ...
	I0920 20:19:19.506676  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aefdeb7d5aac5147087d3269954a3aa7e5a25d7014644dc09ef1f02a44e773e"
	I0920 20:19:19.559191  946192 logs.go:123] Gathering logs for coredns [5e6abd9a0f1925068f8bb29c0c8890d917e637784ba74e694a9c4956a1e767b5] ...
	I0920 20:19:19.559217  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e6abd9a0f1925068f8bb29c0c8890d917e637784ba74e694a9c4956a1e767b5"
	I0920 20:19:19.634976  946192 logs.go:123] Gathering logs for kubernetes-dashboard [cb307e7144c5a3b083f3c34331c270026a1c7b317a40bed72b85f099a061f5b5] ...
	I0920 20:19:19.635004  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb307e7144c5a3b083f3c34331c270026a1c7b317a40bed72b85f099a061f5b5"
	I0920 20:19:19.696989  946192 logs.go:123] Gathering logs for storage-provisioner [81118d4396158739b56e864be3df707098871bebabeeed0ea7ac25fa65540524] ...
	I0920 20:19:19.697017  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81118d4396158739b56e864be3df707098871bebabeeed0ea7ac25fa65540524"
	I0920 20:19:19.762760  946192 logs.go:123] Gathering logs for container status ...
	I0920 20:19:19.762785  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 20:19:19.841410  946192 logs.go:123] Gathering logs for kube-apiserver [9830062e4a05b8c5621d052c0ac7415aafaba5aa2870be98294da31078a6f9cf] ...
	I0920 20:19:19.841486  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9830062e4a05b8c5621d052c0ac7415aafaba5aa2870be98294da31078a6f9cf"
	I0920 20:19:19.924404  946192 logs.go:123] Gathering logs for etcd [56bf62a945c8839e0c16837894b5b12d033770cf63da3df06224c80bc53a0950] ...
	I0920 20:19:19.924448  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56bf62a945c8839e0c16837894b5b12d033770cf63da3df06224c80bc53a0950"
	I0920 20:19:19.963912  946192 logs.go:123] Gathering logs for kube-proxy [9f93cfb56d5061f27e2716d238b65361ae444800034f643bfaf255281067ab06] ...
	I0920 20:19:19.963944  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f93cfb56d5061f27e2716d238b65361ae444800034f643bfaf255281067ab06"
	I0920 20:19:20.027494  946192 logs.go:123] Gathering logs for kube-controller-manager [2e3b79d7c928117e9302779631aaeb7c221344a58cc9252788340585a2c2904b] ...
	I0920 20:19:20.027525  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e3b79d7c928117e9302779631aaeb7c221344a58cc9252788340585a2c2904b"
	I0920 20:19:20.131640  946192 logs.go:123] Gathering logs for kindnet [4d7b9fa90b0c82f4ee82283fad6ca39bd72b6b6e3d95469d6e91454db15ea1c3] ...
	I0920 20:19:20.131675  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d7b9fa90b0c82f4ee82283fad6ca39bd72b6b6e3d95469d6e91454db15ea1c3"
	I0920 20:19:20.188519  946192 out.go:358] Setting ErrFile to fd 2...
	I0920 20:19:20.188550  946192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 20:19:20.188643  946192 out.go:270] X Problems detected in kubelet:
	W0920 20:19:20.188660  946192 out.go:270]   Sep 20 20:18:46 old-k8s-version-060703 kubelet[659]: E0920 20:18:46.535307     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:20.188667  946192 out.go:270]   Sep 20 20:18:54 old-k8s-version-060703 kubelet[659]: E0920 20:18:54.534370     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:20.188802  946192 out.go:270]   Sep 20 20:18:59 old-k8s-version-060703 kubelet[659]: E0920 20:18:59.534951     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:20.188819  946192 out.go:270]   Sep 20 20:19:08 old-k8s-version-060703 kubelet[659]: E0920 20:19:08.534284     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:20.188838  946192 out.go:270]   Sep 20 20:19:10 old-k8s-version-060703 kubelet[659]: E0920 20:19:10.534698     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0920 20:19:20.188851  946192 out.go:358] Setting ErrFile to fd 2...
	I0920 20:19:20.188858  946192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:19:30.189897  946192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:19:30.220003  946192 api_server.go:72] duration metric: took 5m52.659589037s to wait for apiserver process to appear ...
	I0920 20:19:30.220032  946192 api_server.go:88] waiting for apiserver healthz status ...
	I0920 20:19:30.220083  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0920 20:19:30.220146  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 20:19:30.266759  946192 cri.go:89] found id: "9830062e4a05b8c5621d052c0ac7415aafaba5aa2870be98294da31078a6f9cf"
	I0920 20:19:30.266784  946192 cri.go:89] found id: "56e62195e5537fcd4d8f51f18dcf07839a5cb4f372d4ba26811304c694a6346b"
	I0920 20:19:30.266790  946192 cri.go:89] found id: ""
	I0920 20:19:30.266798  946192 logs.go:276] 2 containers: [9830062e4a05b8c5621d052c0ac7415aafaba5aa2870be98294da31078a6f9cf 56e62195e5537fcd4d8f51f18dcf07839a5cb4f372d4ba26811304c694a6346b]
	I0920 20:19:30.266886  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.271042  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.274878  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0920 20:19:30.274961  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 20:19:30.338230  946192 cri.go:89] found id: "a71442909c285554343e6076272da58df7a7993274bc073cc91d5d882d30b487"
	I0920 20:19:30.338253  946192 cri.go:89] found id: "56bf62a945c8839e0c16837894b5b12d033770cf63da3df06224c80bc53a0950"
	I0920 20:19:30.338260  946192 cri.go:89] found id: ""
	I0920 20:19:30.338268  946192 logs.go:276] 2 containers: [a71442909c285554343e6076272da58df7a7993274bc073cc91d5d882d30b487 56bf62a945c8839e0c16837894b5b12d033770cf63da3df06224c80bc53a0950]
	I0920 20:19:30.338375  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.344777  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.348450  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0920 20:19:30.348528  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 20:19:30.411534  946192 cri.go:89] found id: "5e6abd9a0f1925068f8bb29c0c8890d917e637784ba74e694a9c4956a1e767b5"
	I0920 20:19:30.411557  946192 cri.go:89] found id: "fd0a5090e2660a82b741d74d8eb38070de61c287064c5793d8e1f1a5730379b5"
	I0920 20:19:30.411562  946192 cri.go:89] found id: ""
	I0920 20:19:30.411570  946192 logs.go:276] 2 containers: [5e6abd9a0f1925068f8bb29c0c8890d917e637784ba74e694a9c4956a1e767b5 fd0a5090e2660a82b741d74d8eb38070de61c287064c5793d8e1f1a5730379b5]
	I0920 20:19:30.411623  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.416017  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.420499  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0920 20:19:30.420570  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 20:19:30.515762  946192 cri.go:89] found id: "fd2084c4d424a4e4be4ff1e6ec5a2989267fb40dd10997823ec9709aafae4bcd"
	I0920 20:19:30.515782  946192 cri.go:89] found id: "115b7d1c4f8b9cdba309973fd5aec53ec56a81f42cdf7e9b8a9690ba26529a7c"
	I0920 20:19:30.515787  946192 cri.go:89] found id: ""
	I0920 20:19:30.515795  946192 logs.go:276] 2 containers: [fd2084c4d424a4e4be4ff1e6ec5a2989267fb40dd10997823ec9709aafae4bcd 115b7d1c4f8b9cdba309973fd5aec53ec56a81f42cdf7e9b8a9690ba26529a7c]
	I0920 20:19:30.515851  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.520267  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.524379  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0920 20:19:30.524450  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 20:19:30.579639  946192 cri.go:89] found id: "9f93cfb56d5061f27e2716d238b65361ae444800034f643bfaf255281067ab06"
	I0920 20:19:30.579715  946192 cri.go:89] found id: "7f96065f6406f118cc78e2beb4ce41b67935ceefa21d878ac71c7f147289d9ca"
	I0920 20:19:30.579723  946192 cri.go:89] found id: ""
	I0920 20:19:30.579731  946192 logs.go:276] 2 containers: [9f93cfb56d5061f27e2716d238b65361ae444800034f643bfaf255281067ab06 7f96065f6406f118cc78e2beb4ce41b67935ceefa21d878ac71c7f147289d9ca]
	I0920 20:19:30.579815  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.584929  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.590591  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 20:19:30.590662  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 20:19:30.660047  946192 cri.go:89] found id: "2e3b79d7c928117e9302779631aaeb7c221344a58cc9252788340585a2c2904b"
	I0920 20:19:30.660070  946192 cri.go:89] found id: "9676eacbfe78c48f7c25c60b7943745f609bdc8b67c2d5f0f8b2b4f1a4e432c5"
	I0920 20:19:30.660075  946192 cri.go:89] found id: ""
	I0920 20:19:30.660083  946192 logs.go:276] 2 containers: [2e3b79d7c928117e9302779631aaeb7c221344a58cc9252788340585a2c2904b 9676eacbfe78c48f7c25c60b7943745f609bdc8b67c2d5f0f8b2b4f1a4e432c5]
	I0920 20:19:30.660139  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.664060  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.667560  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0920 20:19:30.667629  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 20:19:30.722891  946192 cri.go:89] found id: "4d7b9fa90b0c82f4ee82283fad6ca39bd72b6b6e3d95469d6e91454db15ea1c3"
	I0920 20:19:30.722911  946192 cri.go:89] found id: "dd5d1d400016208884d0eaea14b265bc3034712700233829f2df3a399548ee6c"
	I0920 20:19:30.722916  946192 cri.go:89] found id: ""
	I0920 20:19:30.722924  946192 logs.go:276] 2 containers: [4d7b9fa90b0c82f4ee82283fad6ca39bd72b6b6e3d95469d6e91454db15ea1c3 dd5d1d400016208884d0eaea14b265bc3034712700233829f2df3a399548ee6c]
	I0920 20:19:30.722982  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.728676  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.733341  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0920 20:19:30.733427  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 20:19:30.805448  946192 cri.go:89] found id: "9aefdeb7d5aac5147087d3269954a3aa7e5a25d7014644dc09ef1f02a44e773e"
	I0920 20:19:30.805470  946192 cri.go:89] found id: "81118d4396158739b56e864be3df707098871bebabeeed0ea7ac25fa65540524"
	I0920 20:19:30.805475  946192 cri.go:89] found id: ""
	I0920 20:19:30.805483  946192 logs.go:276] 2 containers: [9aefdeb7d5aac5147087d3269954a3aa7e5a25d7014644dc09ef1f02a44e773e 81118d4396158739b56e864be3df707098871bebabeeed0ea7ac25fa65540524]
	I0920 20:19:30.805542  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.862141  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.888723  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 20:19:30.888800  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 20:19:31.071414  946192 cri.go:89] found id: "cb307e7144c5a3b083f3c34331c270026a1c7b317a40bed72b85f099a061f5b5"
	I0920 20:19:31.071437  946192 cri.go:89] found id: ""
	I0920 20:19:31.071445  946192 logs.go:276] 1 containers: [cb307e7144c5a3b083f3c34331c270026a1c7b317a40bed72b85f099a061f5b5]
	I0920 20:19:31.071502  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:31.083729  946192 logs.go:123] Gathering logs for kube-apiserver [9830062e4a05b8c5621d052c0ac7415aafaba5aa2870be98294da31078a6f9cf] ...
	I0920 20:19:31.083754  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9830062e4a05b8c5621d052c0ac7415aafaba5aa2870be98294da31078a6f9cf"
	I0920 20:19:31.272258  946192 logs.go:123] Gathering logs for coredns [5e6abd9a0f1925068f8bb29c0c8890d917e637784ba74e694a9c4956a1e767b5] ...
	I0920 20:19:31.272477  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e6abd9a0f1925068f8bb29c0c8890d917e637784ba74e694a9c4956a1e767b5"
	I0920 20:19:31.381221  946192 logs.go:123] Gathering logs for kube-scheduler [115b7d1c4f8b9cdba309973fd5aec53ec56a81f42cdf7e9b8a9690ba26529a7c] ...
	I0920 20:19:31.381249  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115b7d1c4f8b9cdba309973fd5aec53ec56a81f42cdf7e9b8a9690ba26529a7c"
	I0920 20:19:31.515439  946192 logs.go:123] Gathering logs for kube-proxy [9f93cfb56d5061f27e2716d238b65361ae444800034f643bfaf255281067ab06] ...
	I0920 20:19:31.515609  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f93cfb56d5061f27e2716d238b65361ae444800034f643bfaf255281067ab06"
	I0920 20:19:31.629196  946192 logs.go:123] Gathering logs for storage-provisioner [9aefdeb7d5aac5147087d3269954a3aa7e5a25d7014644dc09ef1f02a44e773e] ...
	I0920 20:19:31.629221  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aefdeb7d5aac5147087d3269954a3aa7e5a25d7014644dc09ef1f02a44e773e"
	I0920 20:19:31.698666  946192 logs.go:123] Gathering logs for storage-provisioner [81118d4396158739b56e864be3df707098871bebabeeed0ea7ac25fa65540524] ...
	I0920 20:19:31.698752  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81118d4396158739b56e864be3df707098871bebabeeed0ea7ac25fa65540524"
	I0920 20:19:31.779074  946192 logs.go:123] Gathering logs for containerd ...
	I0920 20:19:31.779099  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0920 20:19:31.870732  946192 logs.go:123] Gathering logs for kubelet ...
	I0920 20:19:31.870826  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 20:19:31.934948  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.782974     659 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:31.935285  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.783820     659 reflector.go:138] object-"kube-system"/"coredns-token-skwkx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-skwkx" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:31.935543  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.783953     659 reflector.go:138] object-"kube-system"/"kindnet-token-wtddc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-wtddc" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:31.939286  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.840781     659 reflector.go:138] object-"kube-system"/"kube-proxy-token-zwdqw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-zwdqw" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:31.939544  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.853652     659 reflector.go:138] object-"kube-system"/"metrics-server-token-69pvz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-69pvz" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:31.939783  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.853659     659 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:31.940025  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.853731     659 reflector.go:138] object-"default"/"default-token-vs6wc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-vs6wc" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:31.940289  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.853745     659 reflector.go:138] object-"kube-system"/"storage-provisioner-token-xwg86": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-xwg86" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:31.948541  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:56 old-k8s-version-060703 kubelet[659]: E0920 20:13:56.349137     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 20:19:31.951224  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:56 old-k8s-version-060703 kubelet[659]: E0920 20:13:56.662052     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.954368  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:10 old-k8s-version-060703 kubelet[659]: E0920 20:14:10.543030     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 20:19:31.954828  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:11 old-k8s-version-060703 kubelet[659]: E0920 20:14:11.531667     659 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-2llkj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-2llkj" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:31.958101  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:19 old-k8s-version-060703 kubelet[659]: E0920 20:14:19.798449     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.958686  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:20 old-k8s-version-060703 kubelet[659]: E0920 20:14:20.798130     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.958905  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:21 old-k8s-version-060703 kubelet[659]: E0920 20:14:21.540669     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.959602  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:26 old-k8s-version-060703 kubelet[659]: E0920 20:14:26.449420     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.960135  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:28 old-k8s-version-060703 kubelet[659]: E0920 20:14:28.843061     659 pod_workers.go:191] Error syncing pod 5d9634f3-9aae-4540-8972-26fbe5a73fc5 ("storage-provisioner_kube-system(5d9634f3-9aae-4540-8972-26fbe5a73fc5)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5d9634f3-9aae-4540-8972-26fbe5a73fc5)"
	W0920 20:19:31.962745  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:32 old-k8s-version-060703 kubelet[659]: E0920 20:14:32.543542     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 20:19:31.963712  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:39 old-k8s-version-060703 kubelet[659]: E0920 20:14:39.886487     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.964295  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:46 old-k8s-version-060703 kubelet[659]: E0920 20:14:46.448497     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.964604  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:46 old-k8s-version-060703 kubelet[659]: E0920 20:14:46.535215     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.965271  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:01 old-k8s-version-060703 kubelet[659]: E0920 20:15:01.946516     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.965617  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:02 old-k8s-version-060703 kubelet[659]: E0920 20:15:02.538454     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.965982  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:06 old-k8s-version-060703 kubelet[659]: E0920 20:15:06.448416     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.971258  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:13 old-k8s-version-060703 kubelet[659]: E0920 20:15:13.548809     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 20:19:31.971606  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:17 old-k8s-version-060703 kubelet[659]: E0920 20:15:17.538654     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.971792  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:24 old-k8s-version-060703 kubelet[659]: E0920 20:15:24.534849     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.972133  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:29 old-k8s-version-060703 kubelet[659]: E0920 20:15:29.535186     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.972317  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:37 old-k8s-version-060703 kubelet[659]: E0920 20:15:37.537356     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.972905  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:44 old-k8s-version-060703 kubelet[659]: E0920 20:15:44.079939     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.973234  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:46 old-k8s-version-060703 kubelet[659]: E0920 20:15:46.448892     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.973417  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:51 old-k8s-version-060703 kubelet[659]: E0920 20:15:51.534849     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.973746  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:58 old-k8s-version-060703 kubelet[659]: E0920 20:15:58.535135     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.973930  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:03 old-k8s-version-060703 kubelet[659]: E0920 20:16:03.534859     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.974256  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:12 old-k8s-version-060703 kubelet[659]: E0920 20:16:12.534379     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.976050  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:18 old-k8s-version-060703 kubelet[659]: E0920 20:16:18.534761     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.976435  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:25 old-k8s-version-060703 kubelet[659]: E0920 20:16:25.535349     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.976649  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:33 old-k8s-version-060703 kubelet[659]: E0920 20:16:33.534743     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.977007  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:37 old-k8s-version-060703 kubelet[659]: E0920 20:16:37.534972     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.980399  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:45 old-k8s-version-060703 kubelet[659]: E0920 20:16:45.559442     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 20:19:31.980822  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:51 old-k8s-version-060703 kubelet[659]: E0920 20:16:51.535270     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.981038  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:00 old-k8s-version-060703 kubelet[659]: E0920 20:17:00.536632     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.981668  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:05 old-k8s-version-060703 kubelet[659]: E0920 20:17:05.335746     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.982024  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:06 old-k8s-version-060703 kubelet[659]: E0920 20:17:06.460448     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.982239  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:11 old-k8s-version-060703 kubelet[659]: E0920 20:17:11.534880     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.982610  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:20 old-k8s-version-060703 kubelet[659]: E0920 20:17:20.534257     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.982823  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:25 old-k8s-version-060703 kubelet[659]: E0920 20:17:25.535347     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.983180  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:35 old-k8s-version-060703 kubelet[659]: E0920 20:17:35.535192     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.983395  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:40 old-k8s-version-060703 kubelet[659]: E0920 20:17:40.535020     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.983755  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:50 old-k8s-version-060703 kubelet[659]: E0920 20:17:50.534292     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.983975  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:52 old-k8s-version-060703 kubelet[659]: E0920 20:17:52.534629     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.984346  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:04 old-k8s-version-060703 kubelet[659]: E0920 20:18:04.534361     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.984567  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:07 old-k8s-version-060703 kubelet[659]: E0920 20:18:07.537831     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.984930  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:15 old-k8s-version-060703 kubelet[659]: E0920 20:18:15.534869     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.985145  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:18 old-k8s-version-060703 kubelet[659]: E0920 20:18:18.534940     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.985506  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:28 old-k8s-version-060703 kubelet[659]: E0920 20:18:28.534266     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.985722  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:33 old-k8s-version-060703 kubelet[659]: E0920 20:18:33.534834     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.986084  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:42 old-k8s-version-060703 kubelet[659]: E0920 20:18:42.534713     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.988193  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:46 old-k8s-version-060703 kubelet[659]: E0920 20:18:46.535307     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.988591  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:54 old-k8s-version-060703 kubelet[659]: E0920 20:18:54.534370     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.988808  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:59 old-k8s-version-060703 kubelet[659]: E0920 20:18:59.534951     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.989172  946192 logs.go:138] Found kubelet problem: Sep 20 20:19:08 old-k8s-version-060703 kubelet[659]: E0920 20:19:08.534284     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.989395  946192 logs.go:138] Found kubelet problem: Sep 20 20:19:10 old-k8s-version-060703 kubelet[659]: E0920 20:19:10.534698     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.989757  946192 logs.go:138] Found kubelet problem: Sep 20 20:19:20 old-k8s-version-060703 kubelet[659]: E0920 20:19:20.534289     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.989974  946192 logs.go:138] Found kubelet problem: Sep 20 20:19:21 old-k8s-version-060703 kubelet[659]: E0920 20:19:21.534640     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.990351  946192 logs.go:138] Found kubelet problem: Sep 20 20:19:31 old-k8s-version-060703 kubelet[659]: E0920 20:19:31.540426     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	I0920 20:19:31.990378  946192 logs.go:123] Gathering logs for etcd [a71442909c285554343e6076272da58df7a7993274bc073cc91d5d882d30b487] ...
	I0920 20:19:31.990410  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a71442909c285554343e6076272da58df7a7993274bc073cc91d5d882d30b487"
	I0920 20:19:32.050934  946192 logs.go:123] Gathering logs for etcd [56bf62a945c8839e0c16837894b5b12d033770cf63da3df06224c80bc53a0950] ...
	I0920 20:19:32.051019  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56bf62a945c8839e0c16837894b5b12d033770cf63da3df06224c80bc53a0950"
	I0920 20:19:32.119302  946192 logs.go:123] Gathering logs for kubernetes-dashboard [cb307e7144c5a3b083f3c34331c270026a1c7b317a40bed72b85f099a061f5b5] ...
	I0920 20:19:32.119395  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb307e7144c5a3b083f3c34331c270026a1c7b317a40bed72b85f099a061f5b5"
	I0920 20:19:32.197875  946192 logs.go:123] Gathering logs for coredns [fd0a5090e2660a82b741d74d8eb38070de61c287064c5793d8e1f1a5730379b5] ...
	I0920 20:19:32.197949  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0a5090e2660a82b741d74d8eb38070de61c287064c5793d8e1f1a5730379b5"
	I0920 20:19:32.259354  946192 logs.go:123] Gathering logs for kube-scheduler [fd2084c4d424a4e4be4ff1e6ec5a2989267fb40dd10997823ec9709aafae4bcd] ...
	I0920 20:19:32.259440  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd2084c4d424a4e4be4ff1e6ec5a2989267fb40dd10997823ec9709aafae4bcd"
	I0920 20:19:32.313220  946192 logs.go:123] Gathering logs for kube-controller-manager [2e3b79d7c928117e9302779631aaeb7c221344a58cc9252788340585a2c2904b] ...
	I0920 20:19:32.313299  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e3b79d7c928117e9302779631aaeb7c221344a58cc9252788340585a2c2904b"
	I0920 20:19:32.399048  946192 logs.go:123] Gathering logs for kube-controller-manager [9676eacbfe78c48f7c25c60b7943745f609bdc8b67c2d5f0f8b2b4f1a4e432c5] ...
	I0920 20:19:32.399127  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9676eacbfe78c48f7c25c60b7943745f609bdc8b67c2d5f0f8b2b4f1a4e432c5"
	I0920 20:19:32.498371  946192 logs.go:123] Gathering logs for kindnet [4d7b9fa90b0c82f4ee82283fad6ca39bd72b6b6e3d95469d6e91454db15ea1c3] ...
	I0920 20:19:32.498420  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d7b9fa90b0c82f4ee82283fad6ca39bd72b6b6e3d95469d6e91454db15ea1c3"
	I0920 20:19:32.573011  946192 logs.go:123] Gathering logs for kindnet [dd5d1d400016208884d0eaea14b265bc3034712700233829f2df3a399548ee6c] ...
	I0920 20:19:32.573049  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd5d1d400016208884d0eaea14b265bc3034712700233829f2df3a399548ee6c"
	I0920 20:19:32.631184  946192 logs.go:123] Gathering logs for dmesg ...
	I0920 20:19:32.631223  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 20:19:32.648374  946192 logs.go:123] Gathering logs for describe nodes ...
	I0920 20:19:32.648408  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 20:19:32.812294  946192 logs.go:123] Gathering logs for kube-apiserver [56e62195e5537fcd4d8f51f18dcf07839a5cb4f372d4ba26811304c694a6346b] ...
	I0920 20:19:32.812337  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56e62195e5537fcd4d8f51f18dcf07839a5cb4f372d4ba26811304c694a6346b"
	I0920 20:19:32.869625  946192 logs.go:123] Gathering logs for kube-proxy [7f96065f6406f118cc78e2beb4ce41b67935ceefa21d878ac71c7f147289d9ca] ...
	I0920 20:19:32.869662  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f96065f6406f118cc78e2beb4ce41b67935ceefa21d878ac71c7f147289d9ca"
	I0920 20:19:32.915764  946192 logs.go:123] Gathering logs for container status ...
	I0920 20:19:32.915795  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 20:19:32.970823  946192 out.go:358] Setting ErrFile to fd 2...
	I0920 20:19:32.970852  946192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 20:19:32.970918  946192 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0920 20:19:32.970932  946192 out.go:270]   Sep 20 20:19:08 old-k8s-version-060703 kubelet[659]: E0920 20:19:08.534284     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	  Sep 20 20:19:08 old-k8s-version-060703 kubelet[659]: E0920 20:19:08.534284     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:32.970943  946192 out.go:270]   Sep 20 20:19:10 old-k8s-version-060703 kubelet[659]: E0920 20:19:10.534698     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 20 20:19:10 old-k8s-version-060703 kubelet[659]: E0920 20:19:10.534698     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:32.970960  946192 out.go:270]   Sep 20 20:19:20 old-k8s-version-060703 kubelet[659]: E0920 20:19:20.534289     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	  Sep 20 20:19:20 old-k8s-version-060703 kubelet[659]: E0920 20:19:20.534289     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:32.970967  946192 out.go:270]   Sep 20 20:19:21 old-k8s-version-060703 kubelet[659]: E0920 20:19:21.534640     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 20 20:19:21 old-k8s-version-060703 kubelet[659]: E0920 20:19:21.534640     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:32.970973  946192 out.go:270]   Sep 20 20:19:31 old-k8s-version-060703 kubelet[659]: E0920 20:19:31.540426     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	  Sep 20 20:19:31 old-k8s-version-060703 kubelet[659]: E0920 20:19:31.540426     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	I0920 20:19:32.970999  946192 out.go:358] Setting ErrFile to fd 2...
	I0920 20:19:32.971007  946192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:19:42.971525  946192 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0920 20:19:42.984094  946192 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0920 20:19:42.986656  946192 out.go:201] 
	W0920 20:19:42.989157  946192 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0920 20:19:42.989263  946192 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	* Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0920 20:19:42.989340  946192 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	* Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0920 20:19:42.989378  946192 out.go:270] * 
	* 
	W0920 20:19:42.990441  946192 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 20:19:42.992117  946192 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-060703 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
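The exit status 102 here is the K8S_UNHEALTHY_CONTROL_PLANE exit shown in the stderr log above, where minikube itself suggests purging the profile and retrying. A minimal sketch of that remediation, reusing the profile name and a subset of the flags from the failed invocation (illustrative only, not part of the test run; assumes the same workspace so out/minikube-linux-arm64 exists, and note that --all --purge removes every local minikube profile, not just this one):

	out/minikube-linux-arm64 delete --all --purge
	out/minikube-linux-arm64 start -p old-k8s-version-060703 --memory=2200 --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0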
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-060703
helpers_test.go:235: (dbg) docker inspect old-k8s-version-060703:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ebf8270a7e83efc1540df541410773b2defc356047e54d7ff47598ea8d3479ff",
	        "Created": "2024-09-20T20:10:26.334964031Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 946412,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-20T20:13:30.628755883Z",
	            "FinishedAt": "2024-09-20T20:13:29.379320895Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/ebf8270a7e83efc1540df541410773b2defc356047e54d7ff47598ea8d3479ff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ebf8270a7e83efc1540df541410773b2defc356047e54d7ff47598ea8d3479ff/hostname",
	        "HostsPath": "/var/lib/docker/containers/ebf8270a7e83efc1540df541410773b2defc356047e54d7ff47598ea8d3479ff/hosts",
	        "LogPath": "/var/lib/docker/containers/ebf8270a7e83efc1540df541410773b2defc356047e54d7ff47598ea8d3479ff/ebf8270a7e83efc1540df541410773b2defc356047e54d7ff47598ea8d3479ff-json.log",
	        "Name": "/old-k8s-version-060703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-060703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-060703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/30cbd9b6f4a46a51f2b951333ac493ffa442d598abe8edb1f8edf90d8a0fd218-init/diff:/var/lib/docker/overlay2/0eebc2dd792544f9be347ae96aac5eeb2f1e9299f1fe8e5c7ced4da8d5f2fc78/diff",
	                "MergedDir": "/var/lib/docker/overlay2/30cbd9b6f4a46a51f2b951333ac493ffa442d598abe8edb1f8edf90d8a0fd218/merged",
	                "UpperDir": "/var/lib/docker/overlay2/30cbd9b6f4a46a51f2b951333ac493ffa442d598abe8edb1f8edf90d8a0fd218/diff",
	                "WorkDir": "/var/lib/docker/overlay2/30cbd9b6f4a46a51f2b951333ac493ffa442d598abe8edb1f8edf90d8a0fd218/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-060703",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-060703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-060703",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-060703",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-060703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c32f4e805f9673bce89d4c22230404722b585c98a88e3c28e8c8faf08e143a68",
	            "SandboxKey": "/var/run/docker/netns/c32f4e805f96",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-060703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "51e262ed0a32e65492e517f7128051fcb86ac1d963260b80a6f1822b8a93cc9f",
	                    "EndpointID": "fcceae4db074418720b2b4d848b4b76ad54b94037aa07c35853471948af89520",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-060703",
	                        "ebf8270a7e83"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
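The inspect output above is the full JSON dump; for quicker triage, individual fields can be pulled with docker inspect's Go-template --format flag (illustrative commands, not part of the test run; field names are taken from the JSON above):

	docker inspect --format '{{.State.Status}} started={{.State.StartedAt}} restarts={{.RestartCount}}' old-k8s-version-060703
	docker inspect --format '{{(index .NetworkSettings.Networks "old-k8s-version-060703").IPAddress}}' old-k8s-version-060703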
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-060703 -n old-k8s-version-060703
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-060703 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-060703 logs -n 25: (3.01973553s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-284358                              | cert-expiration-284358       | jenkins | v1.34.0 | 20 Sep 24 20:09 UTC | 20 Sep 24 20:09 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| ssh     | force-systemd-env-169165                               | force-systemd-env-169165     | jenkins | v1.34.0 | 20 Sep 24 20:09 UTC | 20 Sep 24 20:09 UTC |
	|         | ssh cat                                                |                              |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| delete  | -p force-systemd-env-169165                            | force-systemd-env-169165     | jenkins | v1.34.0 | 20 Sep 24 20:09 UTC | 20 Sep 24 20:09 UTC |
	| start   | -p cert-options-485064                                 | cert-options-485064          | jenkins | v1.34.0 | 20 Sep 24 20:09 UTC | 20 Sep 24 20:10 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                              |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                              |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                              |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                              |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| ssh     | cert-options-485064 ssh                                | cert-options-485064          | jenkins | v1.34.0 | 20 Sep 24 20:10 UTC | 20 Sep 24 20:10 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-485064 -- sudo                         | cert-options-485064          | jenkins | v1.34.0 | 20 Sep 24 20:10 UTC | 20 Sep 24 20:10 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-485064                                 | cert-options-485064          | jenkins | v1.34.0 | 20 Sep 24 20:10 UTC | 20 Sep 24 20:10 UTC |
	| start   | -p old-k8s-version-060703                              | old-k8s-version-060703       | jenkins | v1.34.0 | 20 Sep 24 20:10 UTC | 20 Sep 24 20:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-284358                              | cert-expiration-284358       | jenkins | v1.34.0 | 20 Sep 24 20:12 UTC | 20 Sep 24 20:12 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-284358                              | cert-expiration-284358       | jenkins | v1.34.0 | 20 Sep 24 20:12 UTC | 20 Sep 24 20:12 UTC |
	| start   | -p                                                     | default-k8s-diff-port-268732 | jenkins | v1.34.0 | 20 Sep 24 20:12 UTC | 20 Sep 24 20:13 UTC |
	|         | default-k8s-diff-port-268732                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-060703        | old-k8s-version-060703       | jenkins | v1.34.0 | 20 Sep 24 20:13 UTC | 20 Sep 24 20:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-060703                              | old-k8s-version-060703       | jenkins | v1.34.0 | 20 Sep 24 20:13 UTC | 20 Sep 24 20:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-060703             | old-k8s-version-060703       | jenkins | v1.34.0 | 20 Sep 24 20:13 UTC | 20 Sep 24 20:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-060703                              | old-k8s-version-060703       | jenkins | v1.34.0 | 20 Sep 24 20:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-268732  | default-k8s-diff-port-268732 | jenkins | v1.34.0 | 20 Sep 24 20:14 UTC | 20 Sep 24 20:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-268732 | jenkins | v1.34.0 | 20 Sep 24 20:14 UTC | 20 Sep 24 20:14 UTC |
	|         | default-k8s-diff-port-268732                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-268732       | default-k8s-diff-port-268732 | jenkins | v1.34.0 | 20 Sep 24 20:14 UTC | 20 Sep 24 20:14 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-268732 | jenkins | v1.34.0 | 20 Sep 24 20:14 UTC | 20 Sep 24 20:19 UTC |
	|         | default-k8s-diff-port-268732                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=docker                                        |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	| image   | default-k8s-diff-port-268732                           | default-k8s-diff-port-268732 | jenkins | v1.34.0 | 20 Sep 24 20:19 UTC | 20 Sep 24 20:19 UTC |
	|         | image list --format=json                               |                              |         |         |                     |                     |
	| pause   | -p                                                     | default-k8s-diff-port-268732 | jenkins | v1.34.0 | 20 Sep 24 20:19 UTC | 20 Sep 24 20:19 UTC |
	|         | default-k8s-diff-port-268732                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p                                                     | default-k8s-diff-port-268732 | jenkins | v1.34.0 | 20 Sep 24 20:19 UTC | 20 Sep 24 20:19 UTC |
	|         | default-k8s-diff-port-268732                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-268732 | jenkins | v1.34.0 | 20 Sep 24 20:19 UTC | 20 Sep 24 20:19 UTC |
	|         | default-k8s-diff-port-268732                           |                              |         |         |                     |                     |
	| delete  | -p                                                     | default-k8s-diff-port-268732 | jenkins | v1.34.0 | 20 Sep 24 20:19 UTC | 20 Sep 24 20:19 UTC |
	|         | default-k8s-diff-port-268732                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-975064                                  | embed-certs-975064           | jenkins | v1.34.0 | 20 Sep 24 20:19 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                              |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 20:19:24
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 20:19:24.644829  956612 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:19:24.644971  956612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:19:24.644982  956612 out.go:358] Setting ErrFile to fd 2...
	I0920 20:19:24.644986  956612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:19:24.645263  956612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-734403/.minikube/bin
	I0920 20:19:24.645731  956612 out.go:352] Setting JSON to false
	I0920 20:19:24.646915  956612 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":14516,"bootTime":1726849049,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 20:19:24.646992  956612 start.go:139] virtualization:  
	I0920 20:19:24.649644  956612 out.go:177] * [embed-certs-975064] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 20:19:24.652174  956612 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 20:19:24.652317  956612 notify.go:220] Checking for updates...
	I0920 20:19:24.655778  956612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 20:19:24.657785  956612 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-734403/kubeconfig
	I0920 20:19:24.659598  956612 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-734403/.minikube
	I0920 20:19:24.661193  956612 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 20:19:24.662797  956612 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 20:19:24.665005  956612 config.go:182] Loaded profile config "old-k8s-version-060703": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0920 20:19:24.665104  956612 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 20:19:24.697231  956612 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 20:19:24.697372  956612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 20:19:24.751662  956612 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 20:19:24.74097368 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 20:19:24.751799  956612 docker.go:318] overlay module found
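The two docker pre-flight probes above shell out to `docker system info --format "{{json .}}"` and read fields such as ServerVersion, CgroupDriver, NCPU and MemTotal out of the JSON. A minimal standalone sketch of that probe, decoding only a few of the fields visible in the log (this is an illustration, not minikube's actual implementation):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo decodes only the handful of fields the pre-flight check cares about.
type dockerInfo struct {
	ServerVersion string `json:"ServerVersion"`
	CgroupDriver  string `json:"CgroupDriver"`
	MemTotal      int64  `json:"MemTotal"`
	NCPU          int    `json:"NCPU"`
}

func main() {
	// Same probe as in the log: ask the daemon to print its info as JSON.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s, cgroup driver %s, %d CPUs, %d MiB RAM\n",
		info.ServerVersion, info.CgroupDriver, info.NCPU, info.MemTotal/1024/1024)
}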
	I0920 20:19:24.753603  956612 out.go:177] * Using the docker driver based on user configuration
	I0920 20:19:24.755885  956612 start.go:297] selected driver: docker
	I0920 20:19:24.755914  956612 start.go:901] validating driver "docker" against <nil>
	I0920 20:19:24.755929  956612 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 20:19:24.756587  956612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 20:19:24.811658  956612 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 20:19:24.802117314 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 20:19:24.811877  956612 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 20:19:24.812143  956612 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0920 20:19:24.814129  956612 out.go:177] * Using Docker driver with root privileges
	I0920 20:19:24.815952  956612 cni.go:84] Creating CNI manager for ""
	I0920 20:19:24.816020  956612 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 20:19:24.816035  956612 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 20:19:24.816145  956612 start.go:340] cluster config:
	{Name:embed-certs-975064 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-975064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Stati
cIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:19:24.818247  956612 out.go:177] * Starting "embed-certs-975064" primary control-plane node in "embed-certs-975064" cluster
	I0920 20:19:24.820315  956612 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0920 20:19:24.822232  956612 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0920 20:19:19.924404  946192 logs.go:123] Gathering logs for etcd [56bf62a945c8839e0c16837894b5b12d033770cf63da3df06224c80bc53a0950] ...
	I0920 20:19:19.924448  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56bf62a945c8839e0c16837894b5b12d033770cf63da3df06224c80bc53a0950"
	I0920 20:19:19.963912  946192 logs.go:123] Gathering logs for kube-proxy [9f93cfb56d5061f27e2716d238b65361ae444800034f643bfaf255281067ab06] ...
	I0920 20:19:19.963944  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f93cfb56d5061f27e2716d238b65361ae444800034f643bfaf255281067ab06"
	I0920 20:19:20.027494  946192 logs.go:123] Gathering logs for kube-controller-manager [2e3b79d7c928117e9302779631aaeb7c221344a58cc9252788340585a2c2904b] ...
	I0920 20:19:20.027525  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e3b79d7c928117e9302779631aaeb7c221344a58cc9252788340585a2c2904b"
	I0920 20:19:20.131640  946192 logs.go:123] Gathering logs for kindnet [4d7b9fa90b0c82f4ee82283fad6ca39bd72b6b6e3d95469d6e91454db15ea1c3] ...
	I0920 20:19:20.131675  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d7b9fa90b0c82f4ee82283fad6ca39bd72b6b6e3d95469d6e91454db15ea1c3"
	I0920 20:19:20.188519  946192 out.go:358] Setting ErrFile to fd 2...
	I0920 20:19:20.188550  946192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 20:19:20.188643  946192 out.go:270] X Problems detected in kubelet:
	W0920 20:19:20.188660  946192 out.go:270]   Sep 20 20:18:46 old-k8s-version-060703 kubelet[659]: E0920 20:18:46.535307     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:20.188667  946192 out.go:270]   Sep 20 20:18:54 old-k8s-version-060703 kubelet[659]: E0920 20:18:54.534370     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:20.188802  946192 out.go:270]   Sep 20 20:18:59 old-k8s-version-060703 kubelet[659]: E0920 20:18:59.534951     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:20.188819  946192 out.go:270]   Sep 20 20:19:08 old-k8s-version-060703 kubelet[659]: E0920 20:19:08.534284     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:20.188838  946192 out.go:270]   Sep 20 20:19:10 old-k8s-version-060703 kubelet[659]: E0920 20:19:10.534698     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0920 20:19:20.188851  946192 out.go:358] Setting ErrFile to fd 2...
	I0920 20:19:20.188858  946192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:19:24.823983  956612 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 20:19:24.824076  956612 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-734403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0920 20:19:24.824098  956612 cache.go:56] Caching tarball of preloaded images
	I0920 20:19:24.824185  956612 preload.go:172] Found /home/jenkins/minikube-integration/19678-734403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0920 20:19:24.824201  956612 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0920 20:19:24.824311  956612 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/config.json ...
	I0920 20:19:24.824333  956612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/config.json: {Name:mk52ebf6b085d5c797617d5bb989e868670dfc8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
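The profile save above serializes the cluster config to .minikube/profiles/<name>/config.json while holding a write lock. A minimal sketch of that save path, with a made-up two-field struct standing in for minikube's much larger cluster config and the file locking elided:

package main

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// profileConfig is a stand-in; the real cluster config has many more fields.
type profileConfig struct {
	Name              string
	KubernetesVersion string
}

func saveProfile(miniHome, name string, cfg profileConfig) error {
	dir := filepath.Join(miniHome, "profiles", name)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	// The real code takes a file lock (lock.go) before writing; omitted here.
	return os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644)
}

func main() {
	cfg := profileConfig{Name: "embed-certs-975064", KubernetesVersion: "v1.31.1"}
	if err := saveProfile(os.ExpandEnv("$HOME/.minikube"), cfg.Name, cfg); err != nil {
		panic(err)
	}
}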
	I0920 20:19:24.824502  956612 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	W0920 20:19:24.845397  956612 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 is of wrong architecture
	I0920 20:19:24.845424  956612 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 20:19:24.845514  956612 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 20:19:24.845546  956612 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 20:19:24.845555  956612 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 20:19:24.845563  956612 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 20:19:24.845569  956612 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0920 20:19:24.968767  956612 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0920 20:19:24.968808  956612 cache.go:194] Successfully downloaded all kic artifacts
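The cache steps above first look for the kic base image in the local docker daemon (rejecting it here because the cached copy is the wrong architecture) and then fall back to a cached tarball. A rough, simplified equivalent using the docker CLI directly; minikube's real cache format and load path differ, and the tarball location below is assumed:

package main

import (
	"fmt"
	"os/exec"
)

// ensureBaseImage returns nil once the image is available in the local daemon,
// loading it from a cached tarball if the daemon does not have it yet.
func ensureBaseImage(image, cachedTarball string) error {
	// `docker image inspect` exits non-zero when the image is missing.
	if err := exec.Command("docker", "image", "inspect", image).Run(); err == nil {
		return nil
	}
	// Fall back to the tarball (assumed to have been created with `docker save`).
	out, err := exec.Command("docker", "load", "-i", cachedTarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("docker load: %v: %s", err, out)
	}
	return nil
}

func main() {
	img := "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662"
	if err := ensureBaseImage(img, "/tmp/kicbase.tar"); err != nil {
		panic(err)
	}
	fmt.Println("base image available:", img)
}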
	I0920 20:19:24.968840  956612 start.go:360] acquireMachinesLock for embed-certs-975064: {Name:mk3cef09a94e9243d65f201ee0ab2b5cdb1bf460 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0920 20:19:24.969674  956612 start.go:364] duration metric: took 798.489µs to acquireMachinesLock for "embed-certs-975064"
	I0920 20:19:24.969714  956612 start.go:93] Provisioning new machine with config: &{Name:embed-certs-975064 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-975064 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0920 20:19:24.969816  956612 start.go:125] createHost starting for "" (driver="docker")
	I0920 20:19:24.973695  956612 out.go:235] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0920 20:19:24.974124  956612 start.go:159] libmachine.API.Create for "embed-certs-975064" (driver="docker")
	I0920 20:19:24.974179  956612 client.go:168] LocalClient.Create starting
	I0920 20:19:24.974355  956612 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca.pem
	I0920 20:19:24.974440  956612 main.go:141] libmachine: Decoding PEM data...
	I0920 20:19:24.974497  956612 main.go:141] libmachine: Parsing certificate...
	I0920 20:19:24.974608  956612 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19678-734403/.minikube/certs/cert.pem
	I0920 20:19:24.974660  956612 main.go:141] libmachine: Decoding PEM data...
	I0920 20:19:24.974678  956612 main.go:141] libmachine: Parsing certificate...
	I0920 20:19:24.975282  956612 cli_runner.go:164] Run: docker network inspect embed-certs-975064 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0920 20:19:24.992174  956612 cli_runner.go:211] docker network inspect embed-certs-975064 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0920 20:19:24.992262  956612 network_create.go:284] running [docker network inspect embed-certs-975064] to gather additional debugging logs...
	I0920 20:19:24.992284  956612 cli_runner.go:164] Run: docker network inspect embed-certs-975064
	W0920 20:19:25.017428  956612 cli_runner.go:211] docker network inspect embed-certs-975064 returned with exit code 1
	I0920 20:19:25.017499  956612 network_create.go:287] error running [docker network inspect embed-certs-975064]: docker network inspect embed-certs-975064: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-975064 not found
	I0920 20:19:25.017515  956612 network_create.go:289] output of [docker network inspect embed-certs-975064]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-975064 not found
	
	** /stderr **
	I0920 20:19:25.017638  956612 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 20:19:25.040499  956612 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ed49c39a7360 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:9e:dc:a0:2e} reservation:<nil>}
	I0920 20:19:25.041112  956612 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-a08a3102b426 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:fe:79:ed:78} reservation:<nil>}
	I0920 20:19:25.041659  956612 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-71eb29326108 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:e6:5d:b6:6f} reservation:<nil>}
	I0920 20:19:25.042151  956612 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-51e262ed0a32 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:b6:d1:b3:94} reservation:<nil>}
	I0920 20:19:25.042764  956612 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001885420}
	I0920 20:19:25.042793  956612 network_create.go:124] attempt to create docker network embed-certs-975064 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0920 20:19:25.042865  956612 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-975064 embed-certs-975064
	I0920 20:19:25.117739  956612 network_create.go:108] docker network embed-certs-975064 192.168.85.0/24 created
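The subnet scan above walks candidate /24 ranges (192.168.49.0, .58.0, .67.0, .76.0, ...), skips the ones already backing a bridge, and creates the cluster network on the first free one. A simplified sketch of that loop; the starting octet and the step of 9 mirror what the log shows, and the taken-subnet check here simply asks the docker CLI rather than reading host interfaces as the real code does:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// takenSubnets lists the subnets already used by existing docker networks.
func takenSubnets() (map[string]bool, error) {
	out, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		return nil, err
	}
	taken := map[string]bool{}
	for _, id := range strings.Fields(string(out)) {
		sub, err := exec.Command("docker", "network", "inspect", id,
			"-f", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
		if err != nil {
			return nil, err
		}
		taken[strings.TrimSpace(string(sub))] = true
	}
	return taken, nil
}

func main() {
	taken, err := takenSubnets()
	if err != nil {
		panic(err)
	}
	// Walk 192.168.49.0/24, .58.0/24, .67.0/24, ... and take the first free one.
	for octet := 49; octet <= 238; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[subnet] {
			fmt.Println("skipping taken subnet", subnet)
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=embed-certs-975064",
			"embed-certs-975064").CombinedOutput()
		if err != nil {
			panic(fmt.Errorf("network create: %v: %s", err, out))
		}
		fmt.Println("created network on", subnet)
		return
	}
	panic("no free subnet found")
}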
	I0920 20:19:25.117771  956612 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-975064" container
	I0920 20:19:25.117859  956612 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0920 20:19:25.141116  956612 cli_runner.go:164] Run: docker volume create embed-certs-975064 --label name.minikube.sigs.k8s.io=embed-certs-975064 --label created_by.minikube.sigs.k8s.io=true
	I0920 20:19:25.158684  956612 oci.go:103] Successfully created a docker volume embed-certs-975064
	I0920 20:19:25.158772  956612 cli_runner.go:164] Run: docker run --rm --name embed-certs-975064-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-975064 --entrypoint /usr/bin/test -v embed-certs-975064:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0920 20:19:25.813483  956612 oci.go:107] Successfully prepared a docker volume embed-certs-975064
	I0920 20:19:25.813539  956612 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 20:19:25.813562  956612 kic.go:194] Starting extracting preloaded images to volume ...
	I0920 20:19:25.813642  956612 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19678-734403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-975064:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0920 20:19:30.301395  956612 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19678-734403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-975064:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.487704391s)
	I0920 20:19:30.301435  956612 kic.go:203] duration metric: took 4.487867984s to extract preloaded images to volume ...
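The preload step above mounts the lz4 tarball read-only and the cluster's named volume into a throwaway kic container, then untars the images into /var via the volume. A small sketch that assembles essentially the same docker run invocation shown in the log (the tarball path below is shortened/assumed):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662"
	tarball := "/home/jenkins/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4" // assumed path
	volume := "embed-certs-975064"

	// Mount the tarball read-only, mount the named volume at /extractDir,
	// and run tar inside the base image to unpack the preloaded images.
	args := []string{
		"run", "--rm", "--entrypoint", "/usr/bin/tar",
		"-v", tarball + ":/preloaded.tar:ro",
		"-v", volume + ":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	}
	start := time.Now()
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Errorf("extract failed: %v: %s", err, out))
	}
	fmt.Printf("extracted preloaded images in %s\n", time.Since(start))
}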
	W0920 20:19:30.301595  956612 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0920 20:19:30.301726  956612 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0920 20:19:30.379950  956612 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-975064 --name embed-certs-975064 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-975064 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-975064 --network embed-certs-975064 --ip 192.168.85.2 --volume embed-certs-975064:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0920 20:19:30.782630  956612 cli_runner.go:164] Run: docker container inspect embed-certs-975064 --format={{.State.Running}}
	I0920 20:19:30.810731  956612 cli_runner.go:164] Run: docker container inspect embed-certs-975064 --format={{.State.Status}}
	I0920 20:19:30.842341  956612 cli_runner.go:164] Run: docker exec embed-certs-975064 stat /var/lib/dpkg/alternatives/iptables
	I0920 20:19:30.929638  956612 oci.go:144] the created container "embed-certs-975064" has a running status.
	I0920 20:19:30.929677  956612 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19678-734403/.minikube/machines/embed-certs-975064/id_rsa...
	I0920 20:19:31.326956  956612 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19678-734403/.minikube/machines/embed-certs-975064/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0920 20:19:31.360130  956612 cli_runner.go:164] Run: docker container inspect embed-certs-975064 --format={{.State.Status}}
	I0920 20:19:31.389617  956612 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0920 20:19:31.389638  956612 kic_runner.go:114] Args: [docker exec --privileged embed-certs-975064 chown docker:docker /home/docker/.ssh/authorized_keys]
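The kic runner above generates an SSH keypair for the container and installs the public half as /home/docker/.ssh/authorized_keys. A self-contained sketch of generating such a pair; it pulls in the golang.org/x/crypto/ssh module for the authorized_keys encoding, which is an assumption for illustration rather than a claim about minikube's exact helper:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// 2048-bit RSA keypair, like a typical id_rsa / id_rsa.pub pair.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	// Private key in PEM (PKCS#1) form, written with 0600 permissions.
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}

	// Public key in authorized_keys format ("ssh-rsa AAAA...").
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
}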
	I0920 20:19:31.498648  956612 cli_runner.go:164] Run: docker container inspect embed-certs-975064 --format={{.State.Status}}
	I0920 20:19:31.532360  956612 machine.go:93] provisionDockerMachine start ...
	I0920 20:19:31.532460  956612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-975064
	I0920 20:19:31.569892  956612 main.go:141] libmachine: Using SSH client type: native
	I0920 20:19:31.570166  956612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I0920 20:19:31.570181  956612 main.go:141] libmachine: About to run SSH command:
	hostname
	I0920 20:19:31.570928  956612 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49718->127.0.0.1:33443: read: connection reset by peer
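provisionDockerMachine above asks docker which host port is published for the container's 22/tcp and then dials it; the first attempt failing with a connection reset, as in the log, is expected while sshd inside the container is still starting. A rough sketch of the port lookup plus a retry loop, with a plain TCP reachability check standing in for the real SSH handshake:

package main

import (
	"fmt"
	"net"
	"os/exec"
	"strings"
	"time"
)

// sshHostPort asks docker which host port is mapped to the container's 22/tcp.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("embed-certs-975064")
	if err != nil {
		panic(err)
	}
	addr := net.JoinHostPort("127.0.0.1", port)
	// Retry until the forwarded port accepts connections (sshd may not be up yet).
	for i := 0; i < 30; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("ssh port reachable at", addr)
			return
		}
		fmt.Println("retrying:", err)
		time.Sleep(time.Second)
	}
	panic("ssh port never became reachable")
}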
	I0920 20:19:30.189897  946192 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 20:19:30.220003  946192 api_server.go:72] duration metric: took 5m52.659589037s to wait for apiserver process to appear ...
	I0920 20:19:30.220032  946192 api_server.go:88] waiting for apiserver healthz status ...
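At this point the second process in the log (pid 946192, restarting old-k8s-version-060703) is waiting for the apiserver's /healthz endpoint to answer. A minimal version of such a poll; the node IP/port and the skip-verify TLS client are assumptions for illustration, since minikube authenticates with the cluster's own certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves HTTPS with a cluster-internal CA; this quick probe
	// skips verification (an assumption of this sketch, not minikube's code).
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.76.2:8443/healthz" // assumed node IP and port

	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
		}
		time.Sleep(time.Second)
	}
	panic("apiserver never reported healthy")
}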
	I0920 20:19:30.220083  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0920 20:19:30.220146  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0920 20:19:30.266759  946192 cri.go:89] found id: "9830062e4a05b8c5621d052c0ac7415aafaba5aa2870be98294da31078a6f9cf"
	I0920 20:19:30.266784  946192 cri.go:89] found id: "56e62195e5537fcd4d8f51f18dcf07839a5cb4f372d4ba26811304c694a6346b"
	I0920 20:19:30.266790  946192 cri.go:89] found id: ""
	I0920 20:19:30.266798  946192 logs.go:276] 2 containers: [9830062e4a05b8c5621d052c0ac7415aafaba5aa2870be98294da31078a6f9cf 56e62195e5537fcd4d8f51f18dcf07839a5cb4f372d4ba26811304c694a6346b]
	I0920 20:19:30.266886  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.271042  946192 ssh_runner.go:195] Run: which crictl
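The log-gathering pass that follows repeats the same pattern for each control-plane component: list matching container IDs with `crictl ps -a --quiet --name=<component>`, then tail each one with `crictl logs --tail 400 <id>`. A compact sketch of that loop, run directly on the node rather than through minikube's ssh runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all containers (running or exited) whose name matches the
// given filter, the same way the log gathers kube-apiserver, etcd, and friends.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := containerIDs(component)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: %d containers\n", component, len(ids))
		for _, id := range ids {
			// Tail the last 400 lines of each container, as in the log above.
			logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Printf("  %s: %v\n", id, err)
				continue
			}
			fmt.Printf("  %s: %d bytes of logs\n", id, len(logs))
		}
	}
}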
	I0920 20:19:30.274878  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0920 20:19:30.274961  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0920 20:19:30.338230  946192 cri.go:89] found id: "a71442909c285554343e6076272da58df7a7993274bc073cc91d5d882d30b487"
	I0920 20:19:30.338253  946192 cri.go:89] found id: "56bf62a945c8839e0c16837894b5b12d033770cf63da3df06224c80bc53a0950"
	I0920 20:19:30.338260  946192 cri.go:89] found id: ""
	I0920 20:19:30.338268  946192 logs.go:276] 2 containers: [a71442909c285554343e6076272da58df7a7993274bc073cc91d5d882d30b487 56bf62a945c8839e0c16837894b5b12d033770cf63da3df06224c80bc53a0950]
	I0920 20:19:30.338375  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.344777  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.348450  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0920 20:19:30.348528  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0920 20:19:30.411534  946192 cri.go:89] found id: "5e6abd9a0f1925068f8bb29c0c8890d917e637784ba74e694a9c4956a1e767b5"
	I0920 20:19:30.411557  946192 cri.go:89] found id: "fd0a5090e2660a82b741d74d8eb38070de61c287064c5793d8e1f1a5730379b5"
	I0920 20:19:30.411562  946192 cri.go:89] found id: ""
	I0920 20:19:30.411570  946192 logs.go:276] 2 containers: [5e6abd9a0f1925068f8bb29c0c8890d917e637784ba74e694a9c4956a1e767b5 fd0a5090e2660a82b741d74d8eb38070de61c287064c5793d8e1f1a5730379b5]
	I0920 20:19:30.411623  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.416017  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.420499  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0920 20:19:30.420570  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0920 20:19:30.515762  946192 cri.go:89] found id: "fd2084c4d424a4e4be4ff1e6ec5a2989267fb40dd10997823ec9709aafae4bcd"
	I0920 20:19:30.515782  946192 cri.go:89] found id: "115b7d1c4f8b9cdba309973fd5aec53ec56a81f42cdf7e9b8a9690ba26529a7c"
	I0920 20:19:30.515787  946192 cri.go:89] found id: ""
	I0920 20:19:30.515795  946192 logs.go:276] 2 containers: [fd2084c4d424a4e4be4ff1e6ec5a2989267fb40dd10997823ec9709aafae4bcd 115b7d1c4f8b9cdba309973fd5aec53ec56a81f42cdf7e9b8a9690ba26529a7c]
	I0920 20:19:30.515851  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.520267  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.524379  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0920 20:19:30.524450  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0920 20:19:30.579639  946192 cri.go:89] found id: "9f93cfb56d5061f27e2716d238b65361ae444800034f643bfaf255281067ab06"
	I0920 20:19:30.579715  946192 cri.go:89] found id: "7f96065f6406f118cc78e2beb4ce41b67935ceefa21d878ac71c7f147289d9ca"
	I0920 20:19:30.579723  946192 cri.go:89] found id: ""
	I0920 20:19:30.579731  946192 logs.go:276] 2 containers: [9f93cfb56d5061f27e2716d238b65361ae444800034f643bfaf255281067ab06 7f96065f6406f118cc78e2beb4ce41b67935ceefa21d878ac71c7f147289d9ca]
	I0920 20:19:30.579815  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.584929  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.590591  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0920 20:19:30.590662  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0920 20:19:30.660047  946192 cri.go:89] found id: "2e3b79d7c928117e9302779631aaeb7c221344a58cc9252788340585a2c2904b"
	I0920 20:19:30.660070  946192 cri.go:89] found id: "9676eacbfe78c48f7c25c60b7943745f609bdc8b67c2d5f0f8b2b4f1a4e432c5"
	I0920 20:19:30.660075  946192 cri.go:89] found id: ""
	I0920 20:19:30.660083  946192 logs.go:276] 2 containers: [2e3b79d7c928117e9302779631aaeb7c221344a58cc9252788340585a2c2904b 9676eacbfe78c48f7c25c60b7943745f609bdc8b67c2d5f0f8b2b4f1a4e432c5]
	I0920 20:19:30.660139  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.664060  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.667560  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0920 20:19:30.667629  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0920 20:19:30.722891  946192 cri.go:89] found id: "4d7b9fa90b0c82f4ee82283fad6ca39bd72b6b6e3d95469d6e91454db15ea1c3"
	I0920 20:19:30.722911  946192 cri.go:89] found id: "dd5d1d400016208884d0eaea14b265bc3034712700233829f2df3a399548ee6c"
	I0920 20:19:30.722916  946192 cri.go:89] found id: ""
	I0920 20:19:30.722924  946192 logs.go:276] 2 containers: [4d7b9fa90b0c82f4ee82283fad6ca39bd72b6b6e3d95469d6e91454db15ea1c3 dd5d1d400016208884d0eaea14b265bc3034712700233829f2df3a399548ee6c]
	I0920 20:19:30.722982  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.728676  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.733341  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0920 20:19:30.733427  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0920 20:19:30.805448  946192 cri.go:89] found id: "9aefdeb7d5aac5147087d3269954a3aa7e5a25d7014644dc09ef1f02a44e773e"
	I0920 20:19:30.805470  946192 cri.go:89] found id: "81118d4396158739b56e864be3df707098871bebabeeed0ea7ac25fa65540524"
	I0920 20:19:30.805475  946192 cri.go:89] found id: ""
	I0920 20:19:30.805483  946192 logs.go:276] 2 containers: [9aefdeb7d5aac5147087d3269954a3aa7e5a25d7014644dc09ef1f02a44e773e 81118d4396158739b56e864be3df707098871bebabeeed0ea7ac25fa65540524]
	I0920 20:19:30.805542  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.862141  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:30.888723  946192 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0920 20:19:30.888800  946192 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0920 20:19:31.071414  946192 cri.go:89] found id: "cb307e7144c5a3b083f3c34331c270026a1c7b317a40bed72b85f099a061f5b5"
	I0920 20:19:31.071437  946192 cri.go:89] found id: ""
	I0920 20:19:31.071445  946192 logs.go:276] 1 containers: [cb307e7144c5a3b083f3c34331c270026a1c7b317a40bed72b85f099a061f5b5]
	I0920 20:19:31.071502  946192 ssh_runner.go:195] Run: which crictl
	I0920 20:19:31.083729  946192 logs.go:123] Gathering logs for kube-apiserver [9830062e4a05b8c5621d052c0ac7415aafaba5aa2870be98294da31078a6f9cf] ...
	I0920 20:19:31.083754  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9830062e4a05b8c5621d052c0ac7415aafaba5aa2870be98294da31078a6f9cf"
	I0920 20:19:31.272258  946192 logs.go:123] Gathering logs for coredns [5e6abd9a0f1925068f8bb29c0c8890d917e637784ba74e694a9c4956a1e767b5] ...
	I0920 20:19:31.272477  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e6abd9a0f1925068f8bb29c0c8890d917e637784ba74e694a9c4956a1e767b5"
	I0920 20:19:31.381221  946192 logs.go:123] Gathering logs for kube-scheduler [115b7d1c4f8b9cdba309973fd5aec53ec56a81f42cdf7e9b8a9690ba26529a7c] ...
	I0920 20:19:31.381249  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 115b7d1c4f8b9cdba309973fd5aec53ec56a81f42cdf7e9b8a9690ba26529a7c"
	I0920 20:19:31.515439  946192 logs.go:123] Gathering logs for kube-proxy [9f93cfb56d5061f27e2716d238b65361ae444800034f643bfaf255281067ab06] ...
	I0920 20:19:31.515609  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9f93cfb56d5061f27e2716d238b65361ae444800034f643bfaf255281067ab06"
	I0920 20:19:31.629196  946192 logs.go:123] Gathering logs for storage-provisioner [9aefdeb7d5aac5147087d3269954a3aa7e5a25d7014644dc09ef1f02a44e773e] ...
	I0920 20:19:31.629221  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9aefdeb7d5aac5147087d3269954a3aa7e5a25d7014644dc09ef1f02a44e773e"
	I0920 20:19:31.698666  946192 logs.go:123] Gathering logs for storage-provisioner [81118d4396158739b56e864be3df707098871bebabeeed0ea7ac25fa65540524] ...
	I0920 20:19:31.698752  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81118d4396158739b56e864be3df707098871bebabeeed0ea7ac25fa65540524"
	I0920 20:19:31.779074  946192 logs.go:123] Gathering logs for containerd ...
	I0920 20:19:31.779099  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0920 20:19:31.870732  946192 logs.go:123] Gathering logs for kubelet ...
	I0920 20:19:31.870826  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0920 20:19:31.934948  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.782974     659 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:31.935285  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.783820     659 reflector.go:138] object-"kube-system"/"coredns-token-skwkx": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-skwkx" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:31.935543  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.783953     659 reflector.go:138] object-"kube-system"/"kindnet-token-wtddc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-wtddc" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:31.939286  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.840781     659 reflector.go:138] object-"kube-system"/"kube-proxy-token-zwdqw": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-zwdqw" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:31.939544  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.853652     659 reflector.go:138] object-"kube-system"/"metrics-server-token-69pvz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-69pvz" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:31.939783  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.853659     659 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:31.940025  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.853731     659 reflector.go:138] object-"default"/"default-token-vs6wc": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-vs6wc" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:31.940289  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:53 old-k8s-version-060703 kubelet[659]: E0920 20:13:53.853745     659 reflector.go:138] object-"kube-system"/"storage-provisioner-token-xwg86": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-xwg86" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:31.948541  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:56 old-k8s-version-060703 kubelet[659]: E0920 20:13:56.349137     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 20:19:31.951224  946192 logs.go:138] Found kubelet problem: Sep 20 20:13:56 old-k8s-version-060703 kubelet[659]: E0920 20:13:56.662052     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.954368  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:10 old-k8s-version-060703 kubelet[659]: E0920 20:14:10.543030     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 20:19:31.954828  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:11 old-k8s-version-060703 kubelet[659]: E0920 20:14:11.531667     659 reflector.go:138] object-"kubernetes-dashboard"/"kubernetes-dashboard-token-2llkj": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kubernetes-dashboard-token-2llkj" is forbidden: User "system:node:old-k8s-version-060703" cannot list resource "secrets" in API group "" in the namespace "kubernetes-dashboard": no relationship found between node 'old-k8s-version-060703' and this object
	W0920 20:19:31.958101  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:19 old-k8s-version-060703 kubelet[659]: E0920 20:14:19.798449     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.958686  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:20 old-k8s-version-060703 kubelet[659]: E0920 20:14:20.798130     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.958905  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:21 old-k8s-version-060703 kubelet[659]: E0920 20:14:21.540669     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.959602  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:26 old-k8s-version-060703 kubelet[659]: E0920 20:14:26.449420     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.960135  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:28 old-k8s-version-060703 kubelet[659]: E0920 20:14:28.843061     659 pod_workers.go:191] Error syncing pod 5d9634f3-9aae-4540-8972-26fbe5a73fc5 ("storage-provisioner_kube-system(5d9634f3-9aae-4540-8972-26fbe5a73fc5)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5d9634f3-9aae-4540-8972-26fbe5a73fc5)"
	W0920 20:19:31.962745  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:32 old-k8s-version-060703 kubelet[659]: E0920 20:14:32.543542     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 20:19:31.963712  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:39 old-k8s-version-060703 kubelet[659]: E0920 20:14:39.886487     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.964295  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:46 old-k8s-version-060703 kubelet[659]: E0920 20:14:46.448497     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.964604  946192 logs.go:138] Found kubelet problem: Sep 20 20:14:46 old-k8s-version-060703 kubelet[659]: E0920 20:14:46.535215     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.965271  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:01 old-k8s-version-060703 kubelet[659]: E0920 20:15:01.946516     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.965617  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:02 old-k8s-version-060703 kubelet[659]: E0920 20:15:02.538454     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.965982  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:06 old-k8s-version-060703 kubelet[659]: E0920 20:15:06.448416     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.971258  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:13 old-k8s-version-060703 kubelet[659]: E0920 20:15:13.548809     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 20:19:31.971606  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:17 old-k8s-version-060703 kubelet[659]: E0920 20:15:17.538654     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.971792  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:24 old-k8s-version-060703 kubelet[659]: E0920 20:15:24.534849     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.972133  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:29 old-k8s-version-060703 kubelet[659]: E0920 20:15:29.535186     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.972317  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:37 old-k8s-version-060703 kubelet[659]: E0920 20:15:37.537356     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.972905  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:44 old-k8s-version-060703 kubelet[659]: E0920 20:15:44.079939     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.973234  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:46 old-k8s-version-060703 kubelet[659]: E0920 20:15:46.448892     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.973417  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:51 old-k8s-version-060703 kubelet[659]: E0920 20:15:51.534849     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.973746  946192 logs.go:138] Found kubelet problem: Sep 20 20:15:58 old-k8s-version-060703 kubelet[659]: E0920 20:15:58.535135     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.973930  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:03 old-k8s-version-060703 kubelet[659]: E0920 20:16:03.534859     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.974256  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:12 old-k8s-version-060703 kubelet[659]: E0920 20:16:12.534379     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.976050  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:18 old-k8s-version-060703 kubelet[659]: E0920 20:16:18.534761     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.976435  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:25 old-k8s-version-060703 kubelet[659]: E0920 20:16:25.535349     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.976649  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:33 old-k8s-version-060703 kubelet[659]: E0920 20:16:33.534743     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.977007  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:37 old-k8s-version-060703 kubelet[659]: E0920 20:16:37.534972     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.980399  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:45 old-k8s-version-060703 kubelet[659]: E0920 20:16:45.559442     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0920 20:19:31.980822  946192 logs.go:138] Found kubelet problem: Sep 20 20:16:51 old-k8s-version-060703 kubelet[659]: E0920 20:16:51.535270     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.981038  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:00 old-k8s-version-060703 kubelet[659]: E0920 20:17:00.536632     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.981668  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:05 old-k8s-version-060703 kubelet[659]: E0920 20:17:05.335746     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.982024  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:06 old-k8s-version-060703 kubelet[659]: E0920 20:17:06.460448     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.982239  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:11 old-k8s-version-060703 kubelet[659]: E0920 20:17:11.534880     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.982610  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:20 old-k8s-version-060703 kubelet[659]: E0920 20:17:20.534257     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.982823  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:25 old-k8s-version-060703 kubelet[659]: E0920 20:17:25.535347     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.983180  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:35 old-k8s-version-060703 kubelet[659]: E0920 20:17:35.535192     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.983395  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:40 old-k8s-version-060703 kubelet[659]: E0920 20:17:40.535020     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.983755  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:50 old-k8s-version-060703 kubelet[659]: E0920 20:17:50.534292     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.983975  946192 logs.go:138] Found kubelet problem: Sep 20 20:17:52 old-k8s-version-060703 kubelet[659]: E0920 20:17:52.534629     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.984346  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:04 old-k8s-version-060703 kubelet[659]: E0920 20:18:04.534361     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.984567  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:07 old-k8s-version-060703 kubelet[659]: E0920 20:18:07.537831     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.984930  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:15 old-k8s-version-060703 kubelet[659]: E0920 20:18:15.534869     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.985145  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:18 old-k8s-version-060703 kubelet[659]: E0920 20:18:18.534940     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.985506  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:28 old-k8s-version-060703 kubelet[659]: E0920 20:18:28.534266     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.985722  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:33 old-k8s-version-060703 kubelet[659]: E0920 20:18:33.534834     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.986084  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:42 old-k8s-version-060703 kubelet[659]: E0920 20:18:42.534713     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.988193  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:46 old-k8s-version-060703 kubelet[659]: E0920 20:18:46.535307     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.988591  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:54 old-k8s-version-060703 kubelet[659]: E0920 20:18:54.534370     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.988808  946192 logs.go:138] Found kubelet problem: Sep 20 20:18:59 old-k8s-version-060703 kubelet[659]: E0920 20:18:59.534951     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.989172  946192 logs.go:138] Found kubelet problem: Sep 20 20:19:08 old-k8s-version-060703 kubelet[659]: E0920 20:19:08.534284     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.989395  946192 logs.go:138] Found kubelet problem: Sep 20 20:19:10 old-k8s-version-060703 kubelet[659]: E0920 20:19:10.534698     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.989757  946192 logs.go:138] Found kubelet problem: Sep 20 20:19:20 old-k8s-version-060703 kubelet[659]: E0920 20:19:20.534289     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:31.989974  946192 logs.go:138] Found kubelet problem: Sep 20 20:19:21 old-k8s-version-060703 kubelet[659]: E0920 20:19:21.534640     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:31.990351  946192 logs.go:138] Found kubelet problem: Sep 20 20:19:31 old-k8s-version-060703 kubelet[659]: E0920 20:19:31.540426     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	I0920 20:19:31.990378  946192 logs.go:123] Gathering logs for etcd [a71442909c285554343e6076272da58df7a7993274bc073cc91d5d882d30b487] ...
	I0920 20:19:31.990410  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a71442909c285554343e6076272da58df7a7993274bc073cc91d5d882d30b487"
	I0920 20:19:32.050934  946192 logs.go:123] Gathering logs for etcd [56bf62a945c8839e0c16837894b5b12d033770cf63da3df06224c80bc53a0950] ...
	I0920 20:19:32.051019  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56bf62a945c8839e0c16837894b5b12d033770cf63da3df06224c80bc53a0950"
	I0920 20:19:32.119302  946192 logs.go:123] Gathering logs for kubernetes-dashboard [cb307e7144c5a3b083f3c34331c270026a1c7b317a40bed72b85f099a061f5b5] ...
	I0920 20:19:32.119395  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 cb307e7144c5a3b083f3c34331c270026a1c7b317a40bed72b85f099a061f5b5"
	I0920 20:19:32.197875  946192 logs.go:123] Gathering logs for coredns [fd0a5090e2660a82b741d74d8eb38070de61c287064c5793d8e1f1a5730379b5] ...
	I0920 20:19:32.197949  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd0a5090e2660a82b741d74d8eb38070de61c287064c5793d8e1f1a5730379b5"
	I0920 20:19:32.259354  946192 logs.go:123] Gathering logs for kube-scheduler [fd2084c4d424a4e4be4ff1e6ec5a2989267fb40dd10997823ec9709aafae4bcd] ...
	I0920 20:19:32.259440  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd2084c4d424a4e4be4ff1e6ec5a2989267fb40dd10997823ec9709aafae4bcd"
	I0920 20:19:32.313220  946192 logs.go:123] Gathering logs for kube-controller-manager [2e3b79d7c928117e9302779631aaeb7c221344a58cc9252788340585a2c2904b] ...
	I0920 20:19:32.313299  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2e3b79d7c928117e9302779631aaeb7c221344a58cc9252788340585a2c2904b"
	I0920 20:19:32.399048  946192 logs.go:123] Gathering logs for kube-controller-manager [9676eacbfe78c48f7c25c60b7943745f609bdc8b67c2d5f0f8b2b4f1a4e432c5] ...
	I0920 20:19:32.399127  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9676eacbfe78c48f7c25c60b7943745f609bdc8b67c2d5f0f8b2b4f1a4e432c5"
	I0920 20:19:32.498371  946192 logs.go:123] Gathering logs for kindnet [4d7b9fa90b0c82f4ee82283fad6ca39bd72b6b6e3d95469d6e91454db15ea1c3] ...
	I0920 20:19:32.498420  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4d7b9fa90b0c82f4ee82283fad6ca39bd72b6b6e3d95469d6e91454db15ea1c3"
	I0920 20:19:32.573011  946192 logs.go:123] Gathering logs for kindnet [dd5d1d400016208884d0eaea14b265bc3034712700233829f2df3a399548ee6c] ...
	I0920 20:19:32.573049  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dd5d1d400016208884d0eaea14b265bc3034712700233829f2df3a399548ee6c"
	I0920 20:19:32.631184  946192 logs.go:123] Gathering logs for dmesg ...
	I0920 20:19:32.631223  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0920 20:19:32.648374  946192 logs.go:123] Gathering logs for describe nodes ...
	I0920 20:19:32.648408  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0920 20:19:32.812294  946192 logs.go:123] Gathering logs for kube-apiserver [56e62195e5537fcd4d8f51f18dcf07839a5cb4f372d4ba26811304c694a6346b] ...
	I0920 20:19:32.812337  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 56e62195e5537fcd4d8f51f18dcf07839a5cb4f372d4ba26811304c694a6346b"
	I0920 20:19:32.869625  946192 logs.go:123] Gathering logs for kube-proxy [7f96065f6406f118cc78e2beb4ce41b67935ceefa21d878ac71c7f147289d9ca] ...
	I0920 20:19:32.869662  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7f96065f6406f118cc78e2beb4ce41b67935ceefa21d878ac71c7f147289d9ca"
	I0920 20:19:32.915764  946192 logs.go:123] Gathering logs for container status ...
	I0920 20:19:32.915795  946192 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0920 20:19:32.970823  946192 out.go:358] Setting ErrFile to fd 2...
	I0920 20:19:32.970852  946192 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0920 20:19:32.970918  946192 out.go:270] X Problems detected in kubelet:
	W0920 20:19:32.970932  946192 out.go:270]   Sep 20 20:19:08 old-k8s-version-060703 kubelet[659]: E0920 20:19:08.534284     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:32.970943  946192 out.go:270]   Sep 20 20:19:10 old-k8s-version-060703 kubelet[659]: E0920 20:19:10.534698     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:32.970960  946192 out.go:270]   Sep 20 20:19:20 old-k8s-version-060703 kubelet[659]: E0920 20:19:20.534289     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	W0920 20:19:32.970967  946192 out.go:270]   Sep 20 20:19:21 old-k8s-version-060703 kubelet[659]: E0920 20:19:21.534640     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0920 20:19:32.970973  946192 out.go:270]   Sep 20 20:19:31 old-k8s-version-060703 kubelet[659]: E0920 20:19:31.540426     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	I0920 20:19:32.970999  946192 out.go:358] Setting ErrFile to fd 2...
	I0920 20:19:32.971007  946192 out.go:392] TERM=,COLORTERM=, which probably does not support color
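
To dig into the two recurring kubelet problems above (metrics-server stuck in ImagePullBackOff against the deliberately unresolvable fake.domain registry, and dashboard-metrics-scraper in CrashLoopBackOff), a minimal sketch of the manual follow-up; the pod and profile names are taken verbatim from the log, and the kubectl context is assumed to share the profile name, as minikube normally sets it up:

	# Events behind the ImagePullBackOff / CrashLoopBackOff messages (pod names verbatim from the kubelet lines above).
	kubectl --context old-k8s-version-060703 -n kube-system describe pod metrics-server-9975d5f86-rrgz6
	kubectl --context old-k8s-version-060703 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-8d5bb5db8-lhw7g

	# The same kubelet messages, read directly from the node's journal.
	minikube -p old-k8s-version-060703 ssh -- sudo journalctl -u kubelet --no-pager | grep -E 'metrics-server|dashboard-metrics-scraper'
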
	I0920 20:19:34.718085  956612 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-975064
	
	I0920 20:19:34.718112  956612 ubuntu.go:169] provisioning hostname "embed-certs-975064"
	I0920 20:19:34.718225  956612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-975064
	I0920 20:19:34.737802  956612 main.go:141] libmachine: Using SSH client type: native
	I0920 20:19:34.738086  956612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I0920 20:19:34.738105  956612 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-975064 && echo "embed-certs-975064" | sudo tee /etc/hostname
	I0920 20:19:34.905093  956612 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-975064
	
	I0920 20:19:34.905210  956612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-975064
	I0920 20:19:34.924555  956612 main.go:141] libmachine: Using SSH client type: native
	I0920 20:19:34.924809  956612 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 33443 <nil> <nil>}
	I0920 20:19:34.924839  956612 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-975064' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-975064/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-975064' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0920 20:19:35.079641  956612 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0920 20:19:35.079670  956612 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19678-734403/.minikube CaCertPath:/home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19678-734403/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19678-734403/.minikube}
	I0920 20:19:35.079703  956612 ubuntu.go:177] setting up certificates
	I0920 20:19:35.079713  956612 provision.go:84] configureAuth start
	I0920 20:19:35.079785  956612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-975064
	I0920 20:19:35.098952  956612 provision.go:143] copyHostCerts
	I0920 20:19:35.099037  956612 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-734403/.minikube/key.pem, removing ...
	I0920 20:19:35.099052  956612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-734403/.minikube/key.pem
	I0920 20:19:35.099150  956612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19678-734403/.minikube/key.pem (1679 bytes)
	I0920 20:19:35.099259  956612 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-734403/.minikube/ca.pem, removing ...
	I0920 20:19:35.099270  956612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-734403/.minikube/ca.pem
	I0920 20:19:35.099301  956612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19678-734403/.minikube/ca.pem (1078 bytes)
	I0920 20:19:35.099360  956612 exec_runner.go:144] found /home/jenkins/minikube-integration/19678-734403/.minikube/cert.pem, removing ...
	I0920 20:19:35.099371  956612 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19678-734403/.minikube/cert.pem
	I0920 20:19:35.099395  956612 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19678-734403/.minikube/cert.pem (1123 bytes)
	I0920 20:19:35.099452  956612 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19678-734403/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca-key.pem org=jenkins.embed-certs-975064 san=[127.0.0.1 192.168.85.2 embed-certs-975064 localhost minikube]
	I0920 20:19:35.371403  956612 provision.go:177] copyRemoteCerts
	I0920 20:19:35.371482  956612 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0920 20:19:35.371527  956612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-975064
	I0920 20:19:35.389631  956612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/embed-certs-975064/id_rsa Username:docker}
	I0920 20:19:35.496099  956612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0920 20:19:35.524669  956612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0920 20:19:35.551229  956612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0920 20:19:35.577672  956612 provision.go:87] duration metric: took 497.944538ms to configureAuth
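
configureAuth above generates a server certificate with SANs [127.0.0.1 192.168.85.2 embed-certs-975064 localhost minikube] and copies ca.pem, server.pem and server-key.pem to /etc/docker on the machine. A quick sketch for verifying the result, assuming openssl is available in the node image:

	# Certificates placed by copyRemoteCerts (paths taken from the scp lines above).
	minikube -p embed-certs-975064 ssh -- ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem

	# Confirm the SANs baked into the server certificate.
	minikube -p embed-certs-975064 ssh -- sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
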
	I0920 20:19:35.577706  956612 ubuntu.go:193] setting minikube options for container-runtime
	I0920 20:19:35.577905  956612 config.go:182] Loaded profile config "embed-certs-975064": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 20:19:35.577916  956612 machine.go:96] duration metric: took 4.045537325s to provisionDockerMachine
	I0920 20:19:35.577923  956612 client.go:171] duration metric: took 10.60373805s to LocalClient.Create
	I0920 20:19:35.577938  956612 start.go:167] duration metric: took 10.603816598s to libmachine.API.Create "embed-certs-975064"
	I0920 20:19:35.577953  956612 start.go:293] postStartSetup for "embed-certs-975064" (driver="docker")
	I0920 20:19:35.577962  956612 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0920 20:19:35.578029  956612 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0920 20:19:35.578074  956612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-975064
	I0920 20:19:35.595382  956612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/embed-certs-975064/id_rsa Username:docker}
	I0920 20:19:35.700265  956612 ssh_runner.go:195] Run: cat /etc/os-release
	I0920 20:19:35.703755  956612 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0920 20:19:35.703795  956612 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0920 20:19:35.703808  956612 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0920 20:19:35.703818  956612 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0920 20:19:35.703839  956612 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-734403/.minikube/addons for local assets ...
	I0920 20:19:35.703907  956612 filesync.go:126] Scanning /home/jenkins/minikube-integration/19678-734403/.minikube/files for local assets ...
	I0920 20:19:35.703995  956612 filesync.go:149] local asset: /home/jenkins/minikube-integration/19678-734403/.minikube/files/etc/ssl/certs/7397872.pem -> 7397872.pem in /etc/ssl/certs
	I0920 20:19:35.704125  956612 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0920 20:19:35.713428  956612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/files/etc/ssl/certs/7397872.pem --> /etc/ssl/certs/7397872.pem (1708 bytes)
	I0920 20:19:35.740943  956612 start.go:296] duration metric: took 162.974314ms for postStartSetup
	I0920 20:19:35.741328  956612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-975064
	I0920 20:19:35.758975  956612 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/config.json ...
	I0920 20:19:35.759283  956612 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 20:19:35.759335  956612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-975064
	I0920 20:19:35.780986  956612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/embed-certs-975064/id_rsa Username:docker}
	I0920 20:19:35.880505  956612 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0920 20:19:35.885376  956612 start.go:128] duration metric: took 10.915540934s to createHost
	I0920 20:19:35.885403  956612 start.go:83] releasing machines lock for "embed-certs-975064", held for 10.91571014s
	I0920 20:19:35.885486  956612 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-975064
	I0920 20:19:35.902600  956612 ssh_runner.go:195] Run: cat /version.json
	I0920 20:19:35.902644  956612 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0920 20:19:35.902655  956612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-975064
	I0920 20:19:35.902714  956612 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-975064
	I0920 20:19:35.920927  956612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/embed-certs-975064/id_rsa Username:docker}
	I0920 20:19:35.941092  956612 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/embed-certs-975064/id_rsa Username:docker}
	I0920 20:19:36.177180  956612 ssh_runner.go:195] Run: systemctl --version
	I0920 20:19:36.181894  956612 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0920 20:19:36.186397  956612 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0920 20:19:36.213669  956612 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0920 20:19:36.213814  956612 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0920 20:19:36.244589  956612 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0920 20:19:36.244617  956612 start.go:495] detecting cgroup driver to use...
	I0920 20:19:36.244651  956612 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0920 20:19:36.244715  956612 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0920 20:19:36.257782  956612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0920 20:19:36.270479  956612 docker.go:217] disabling cri-docker service (if available) ...
	I0920 20:19:36.270548  956612 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0920 20:19:36.286597  956612 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0920 20:19:36.302185  956612 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0920 20:19:36.391285  956612 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0920 20:19:36.499263  956612 docker.go:233] disabling docker service ...
	I0920 20:19:36.499336  956612 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0920 20:19:36.521832  956612 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0920 20:19:36.538414  956612 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0920 20:19:36.668364  956612 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0920 20:19:36.772114  956612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0920 20:19:36.788006  956612 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0920 20:19:36.807337  956612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0920 20:19:36.818685  956612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0920 20:19:36.831593  956612 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0920 20:19:36.831711  956612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0920 20:19:36.843273  956612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 20:19:36.856189  956612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0920 20:19:36.866942  956612 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0920 20:19:36.877472  956612 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0920 20:19:36.888109  956612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0920 20:19:36.898812  956612 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0920 20:19:36.909665  956612 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0920 20:19:36.920101  956612 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0920 20:19:36.930595  956612 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0920 20:19:36.939347  956612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:19:37.069056  956612 ssh_runner.go:195] Run: sudo systemctl restart containerd
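
The sequence above points crictl at the containerd socket and patches /etc/containerd/config.toml in place (cgroupfs instead of SystemdCgroup, registry.k8s.io/pause:3.10 as the sandbox image, conf_dir set to /etc/cni/net.d, unprivileged ports enabled) before reloading systemd and restarting containerd. Condensed, the same reconfiguration on the node looks roughly like this, with every value lifted from the sed expressions above:

	# Point crictl at containerd's socket.
	printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml

	# Patch the existing containerd config in place rather than regenerating it.
	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml

	# Pick up the new configuration.
	sudo systemctl daemon-reload && sudo systemctl restart containerd
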
	I0920 20:19:37.226654  956612 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0920 20:19:37.226788  956612 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0920 20:19:37.231424  956612 start.go:563] Will wait 60s for crictl version
	I0920 20:19:37.231551  956612 ssh_runner.go:195] Run: which crictl
	I0920 20:19:37.235373  956612 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0920 20:19:37.278362  956612 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0920 20:19:37.278441  956612 ssh_runner.go:195] Run: containerd --version
	I0920 20:19:37.303240  956612 ssh_runner.go:195] Run: containerd --version
	I0920 20:19:37.329153  956612 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0920 20:19:37.331385  956612 cli_runner.go:164] Run: docker network inspect embed-certs-975064 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0920 20:19:37.348468  956612 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0920 20:19:37.352598  956612 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 20:19:37.363789  956612 kubeadm.go:883] updating cluster {Name:embed-certs-975064 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-975064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0920 20:19:37.363913  956612 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 20:19:37.363978  956612 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 20:19:37.406786  956612 containerd.go:627] all images are preloaded for containerd runtime.
	I0920 20:19:37.406817  956612 containerd.go:534] Images already preloaded, skipping extraction
	I0920 20:19:37.406894  956612 ssh_runner.go:195] Run: sudo crictl images --output json
	I0920 20:19:37.453863  956612 containerd.go:627] all images are preloaded for containerd runtime.
	I0920 20:19:37.453891  956612 cache_images.go:84] Images are preloaded, skipping loading
	I0920 20:19:37.453899  956612 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.31.1 containerd true true} ...
	I0920 20:19:37.454060  956612 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-975064 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:embed-certs-975064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0920 20:19:37.454154  956612 ssh_runner.go:195] Run: sudo crictl info
	I0920 20:19:37.498090  956612 cni.go:84] Creating CNI manager for ""
	I0920 20:19:37.498116  956612 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 20:19:37.498128  956612 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0920 20:19:37.498151  956612 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-975064 NodeName:embed-certs-975064 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0920 20:19:37.498286  956612 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-975064"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0920 20:19:37.498392  956612 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0920 20:19:37.508456  956612 binaries.go:44] Found k8s binaries, skipping transfer
	I0920 20:19:37.508578  956612 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0920 20:19:37.517859  956612 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0920 20:19:37.538864  956612 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0920 20:19:37.560650  956612 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0920 20:19:37.581056  956612 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0920 20:19:37.584799  956612 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0920 20:19:37.596251  956612 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0920 20:19:37.687399  956612 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0920 20:19:37.703590  956612 certs.go:68] Setting up /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064 for IP: 192.168.85.2
	I0920 20:19:37.703628  956612 certs.go:194] generating shared ca certs ...
	I0920 20:19:37.703646  956612 certs.go:226] acquiring lock for ca certs: {Name:mk05671cd2fa7cea0f374261a29f5dc2649893f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:19:37.703858  956612 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19678-734403/.minikube/ca.key
	I0920 20:19:37.703908  956612 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19678-734403/.minikube/proxy-client-ca.key
	I0920 20:19:37.703918  956612 certs.go:256] generating profile certs ...
	I0920 20:19:37.703985  956612 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/client.key
	I0920 20:19:37.704012  956612 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/client.crt with IP's: []
	I0920 20:19:38.301717  956612 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/client.crt ...
	I0920 20:19:38.301751  956612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/client.crt: {Name:mk3193d4c55a1a9a1ebe7f4170654dc5aa5a24c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:19:38.302583  956612 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/client.key ...
	I0920 20:19:38.302601  956612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/client.key: {Name:mk168bf2ef553da2d6cf12767a78d2e6a02cff76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:19:38.302709  956612 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/apiserver.key.9f9cdf0c
	I0920 20:19:38.302729  956612 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/apiserver.crt.9f9cdf0c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0920 20:19:38.926799  956612 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/apiserver.crt.9f9cdf0c ...
	I0920 20:19:38.926871  956612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/apiserver.crt.9f9cdf0c: {Name:mkc3d35e4c179fbdc3879efadb942b6f297b1582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:19:38.927522  956612 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/apiserver.key.9f9cdf0c ...
	I0920 20:19:38.927541  956612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/apiserver.key.9f9cdf0c: {Name:mk5d0b245a41abac6a4654bd2bdf9adc169cd95a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:19:38.927644  956612 certs.go:381] copying /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/apiserver.crt.9f9cdf0c -> /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/apiserver.crt
	I0920 20:19:38.927736  956612 certs.go:385] copying /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/apiserver.key.9f9cdf0c -> /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/apiserver.key
	I0920 20:19:38.927831  956612 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/proxy-client.key
	I0920 20:19:38.927850  956612 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/proxy-client.crt with IP's: []
	I0920 20:19:39.528444  956612 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/proxy-client.crt ...
	I0920 20:19:39.528524  956612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/proxy-client.crt: {Name:mkcf8cfe843fbcbfd640091d517af0fb30897eb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:19:39.529187  956612 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/proxy-client.key ...
	I0920 20:19:39.529221  956612 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/proxy-client.key: {Name:mkb4d08d9d9668f80eda222ff41551f1197d81a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 20:19:39.529444  956612 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/739787.pem (1338 bytes)
	W0920 20:19:39.529491  956612 certs.go:480] ignoring /home/jenkins/minikube-integration/19678-734403/.minikube/certs/739787_empty.pem, impossibly tiny 0 bytes
	I0920 20:19:39.529515  956612 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca-key.pem (1675 bytes)
	I0920 20:19:39.529544  956612 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/ca.pem (1078 bytes)
	I0920 20:19:39.529575  956612 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/cert.pem (1123 bytes)
	I0920 20:19:39.529600  956612 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-734403/.minikube/certs/key.pem (1679 bytes)
	I0920 20:19:39.529643  956612 certs.go:484] found cert: /home/jenkins/minikube-integration/19678-734403/.minikube/files/etc/ssl/certs/7397872.pem (1708 bytes)
	I0920 20:19:39.530260  956612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0920 20:19:39.557131  956612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0920 20:19:39.584469  956612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0920 20:19:39.611083  956612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0920 20:19:39.636571  956612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0920 20:19:39.663002  956612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0920 20:19:39.689052  956612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0920 20:19:39.714564  956612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/embed-certs-975064/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0920 20:19:39.743464  956612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/files/etc/ssl/certs/7397872.pem --> /usr/share/ca-certificates/7397872.pem (1708 bytes)
	I0920 20:19:39.771375  956612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0920 20:19:39.798844  956612 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19678-734403/.minikube/certs/739787.pem --> /usr/share/ca-certificates/739787.pem (1338 bytes)
	I0920 20:19:39.825449  956612 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0920 20:19:39.844688  956612 ssh_runner.go:195] Run: openssl version
	I0920 20:19:39.854497  956612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0920 20:19:39.865103  956612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:19:39.869154  956612 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 20 19:23 /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:19:39.869223  956612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0920 20:19:39.876583  956612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0920 20:19:39.886082  956612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/739787.pem && ln -fs /usr/share/ca-certificates/739787.pem /etc/ssl/certs/739787.pem"
	I0920 20:19:39.896015  956612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/739787.pem
	I0920 20:19:39.899712  956612 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 20 19:33 /usr/share/ca-certificates/739787.pem
	I0920 20:19:39.899791  956612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/739787.pem
	I0920 20:19:39.907486  956612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/739787.pem /etc/ssl/certs/51391683.0"
	I0920 20:19:39.917516  956612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7397872.pem && ln -fs /usr/share/ca-certificates/7397872.pem /etc/ssl/certs/7397872.pem"
	I0920 20:19:39.927664  956612 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7397872.pem
	I0920 20:19:39.931688  956612 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 20 19:33 /usr/share/ca-certificates/7397872.pem
	I0920 20:19:39.931793  956612 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7397872.pem
	I0920 20:19:39.938928  956612 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/7397872.pem /etc/ssl/certs/3ec20f2e.0"
	I0920 20:19:39.948398  956612 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0920 20:19:39.951813  956612 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0920 20:19:39.951867  956612 kubeadm.go:392] StartCluster: {Name:embed-certs-975064 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-975064 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 20:19:39.951948  956612 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0920 20:19:39.952009  956612 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0920 20:19:39.993190  956612 cri.go:89] found id: ""
	I0920 20:19:39.993268  956612 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0920 20:19:40.012742  956612 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0920 20:19:40.051503  956612 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0920 20:19:40.051649  956612 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0920 20:19:40.063048  956612 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0920 20:19:40.063124  956612 kubeadm.go:157] found existing configuration files:
	
	I0920 20:19:40.063235  956612 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0920 20:19:40.073259  956612 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0920 20:19:40.073343  956612 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0920 20:19:40.083238  956612 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0920 20:19:40.093051  956612 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0920 20:19:40.093127  956612 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0920 20:19:40.102687  956612 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0920 20:19:40.112808  956612 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0920 20:19:40.112923  956612 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0920 20:19:40.123425  956612 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0920 20:19:40.135926  956612 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0920 20:19:40.135999  956612 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0920 20:19:40.146982  956612 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0920 20:19:40.204814  956612 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0920 20:19:40.205127  956612 kubeadm.go:310] [preflight] Running pre-flight checks
	I0920 20:19:40.223582  956612 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0920 20:19:40.223732  956612 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0920 20:19:40.223789  956612 kubeadm.go:310] OS: Linux
	I0920 20:19:40.223869  956612 kubeadm.go:310] CGROUPS_CPU: enabled
	I0920 20:19:40.223946  956612 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0920 20:19:40.224020  956612 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0920 20:19:40.224100  956612 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0920 20:19:40.224177  956612 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0920 20:19:40.224255  956612 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0920 20:19:40.224330  956612 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0920 20:19:40.224411  956612 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0920 20:19:40.224486  956612 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0920 20:19:40.295246  956612 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0920 20:19:40.295381  956612 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0920 20:19:40.295515  956612 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0920 20:19:40.301361  956612 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0920 20:19:42.971525  946192 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0920 20:19:42.984094  946192 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0920 20:19:42.986656  946192 out.go:201] 
	W0920 20:19:42.989157  946192 out.go:270] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0920 20:19:42.989263  946192 out.go:270] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0920 20:19:42.989340  946192 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0920 20:19:42.989378  946192 out.go:270] * 
	W0920 20:19:42.990441  946192 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0920 20:19:42.992117  946192 out.go:201] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	35be92d89af3c       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   638e7f6a15bd9       dashboard-metrics-scraper-8d5bb5db8-lhw7g
	9aefdeb7d5aac       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         2                   8b8b61493d018       storage-provisioner
	cb307e7144c5a       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   4efeb0b50fdf2       kubernetes-dashboard-cd95d586-4ln9z
	81118d4396158       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   8b8b61493d018       storage-provisioner
	9f93cfb56d506       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   af59680067767       kube-proxy-vnktx
	4d7b9fa90b0c8       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   ce7627baaeeb1       kindnet-5pl4k
	5e6abd9a0f192       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   5e27e4b803ae4       coredns-74ff55c5b-5cx2l
	6c6c51ad7a859       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   5bdbe92b4c057       busybox
	9830062e4a05b       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   07d27aeb42058       kube-apiserver-old-k8s-version-060703
	fd2084c4d424a       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   28db46e894dc6       kube-scheduler-old-k8s-version-060703
	2e3b79d7c9281       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   86e403e8e9ca2       kube-controller-manager-old-k8s-version-060703
	a71442909c285       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   ba0ef78df8233       etcd-old-k8s-version-060703
	57d416c06de15       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   063a531c68aa1       busybox
	fd0a5090e2660       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   1a745900051a5       coredns-74ff55c5b-5cx2l
	dd5d1d4000162       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   09a2eb3c0927a       kindnet-5pl4k
	7f96065f6406f       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   8442a587c21d5       kube-proxy-vnktx
	115b7d1c4f8b9       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   45170ed78e221       kube-scheduler-old-k8s-version-060703
	9676eacbfe78c       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   dcb488fd3cfd5       kube-controller-manager-old-k8s-version-060703
	56e62195e5537       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   b858a82b16d30       kube-apiserver-old-k8s-version-060703
	56bf62a945c88       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   4fb27c42226b5       etcd-old-k8s-version-060703
	
	
	==> containerd <==
	Sep 20 20:15:43 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:15:43.572929277Z" level=info msg="CreateContainer within sandbox \"638e7f6a15bd9578ec41424fe5fdb6eb15497479466466f18be71bf21676a7f5\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"2821ebb650a45d649a883420de1d86d32c3547933a54ffd8d147622528605eed\""
	Sep 20 20:15:43 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:15:43.573946598Z" level=info msg="StartContainer for \"2821ebb650a45d649a883420de1d86d32c3547933a54ffd8d147622528605eed\""
	Sep 20 20:15:43 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:15:43.650604227Z" level=info msg="StartContainer for \"2821ebb650a45d649a883420de1d86d32c3547933a54ffd8d147622528605eed\" returns successfully"
	Sep 20 20:15:43 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:15:43.680834283Z" level=info msg="shim disconnected" id=2821ebb650a45d649a883420de1d86d32c3547933a54ffd8d147622528605eed namespace=k8s.io
	Sep 20 20:15:43 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:15:43.680899645Z" level=warning msg="cleaning up after shim disconnected" id=2821ebb650a45d649a883420de1d86d32c3547933a54ffd8d147622528605eed namespace=k8s.io
	Sep 20 20:15:43 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:15:43.680912150Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 20 20:15:44 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:15:44.082549207Z" level=info msg="RemoveContainer for \"2e8968e4746464c4d52b1ed5d98f9c82b2b4fb63d51e2d45a40adc902e890ac3\""
	Sep 20 20:15:44 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:15:44.088632766Z" level=info msg="RemoveContainer for \"2e8968e4746464c4d52b1ed5d98f9c82b2b4fb63d51e2d45a40adc902e890ac3\" returns successfully"
	Sep 20 20:16:45 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:16:45.545028310Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 20:16:45 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:16:45.555629070Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 20 20:16:45 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:16:45.557770506Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 20 20:16:45 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:16:45.557803376Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 20 20:17:04 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:17:04.536927935Z" level=info msg="CreateContainer within sandbox \"638e7f6a15bd9578ec41424fe5fdb6eb15497479466466f18be71bf21676a7f5\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Sep 20 20:17:04 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:17:04.553916417Z" level=info msg="CreateContainer within sandbox \"638e7f6a15bd9578ec41424fe5fdb6eb15497479466466f18be71bf21676a7f5\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"35be92d89af3cc5401fdd10445e6fc044327c8a6b71254445df33d3bd3cc37e6\""
	Sep 20 20:17:04 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:17:04.555242897Z" level=info msg="StartContainer for \"35be92d89af3cc5401fdd10445e6fc044327c8a6b71254445df33d3bd3cc37e6\""
	Sep 20 20:17:04 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:17:04.633740453Z" level=info msg="StartContainer for \"35be92d89af3cc5401fdd10445e6fc044327c8a6b71254445df33d3bd3cc37e6\" returns successfully"
	Sep 20 20:17:04 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:17:04.667313431Z" level=info msg="shim disconnected" id=35be92d89af3cc5401fdd10445e6fc044327c8a6b71254445df33d3bd3cc37e6 namespace=k8s.io
	Sep 20 20:17:04 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:17:04.667384183Z" level=warning msg="cleaning up after shim disconnected" id=35be92d89af3cc5401fdd10445e6fc044327c8a6b71254445df33d3bd3cc37e6 namespace=k8s.io
	Sep 20 20:17:04 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:17:04.667396696Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 20 20:17:05 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:17:05.346648244Z" level=info msg="RemoveContainer for \"2821ebb650a45d649a883420de1d86d32c3547933a54ffd8d147622528605eed\""
	Sep 20 20:17:05 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:17:05.353273709Z" level=info msg="RemoveContainer for \"2821ebb650a45d649a883420de1d86d32c3547933a54ffd8d147622528605eed\" returns successfully"
	Sep 20 20:19:32 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:19:32.537146710Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 20:19:32 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:19:32.546020452Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 20 20:19:32 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:19:32.548099366Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 20 20:19:32 old-k8s-version-060703 containerd[570]: time="2024-09-20T20:19:32.548278959Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [5e6abd9a0f1925068f8bb29c0c8890d917e637784ba74e694a9c4956a1e767b5] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:33615 - 30120 "HINFO IN 927412840456619104.6578175384974482097. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013169286s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0920 20:14:26.129389       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-20 20:13:56.128770622 +0000 UTC m=+0.022948371) (total time: 30.000507029s):
	Trace[2019727887]: [30.000507029s] [30.000507029s] END
	E0920 20:14:26.129429       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0920 20:14:26.129711       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-20 20:13:56.129338835 +0000 UTC m=+0.023516535) (total time: 30.000358361s):
	Trace[939984059]: [30.000358361s] [30.000358361s] END
	E0920 20:14:26.129729       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0920 20:14:26.129798       1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-20 20:13:56.129602187 +0000 UTC m=+0.023779895) (total time: 30.00018593s):
	Trace[911902081]: [30.00018593s] [30.00018593s] END
	E0920 20:14:26.129809       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [fd0a5090e2660a82b741d74d8eb38070de61c287064c5793d8e1f1a5730379b5] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:57767 - 18831 "HINFO IN 4396516519228299135.8704599364568444171. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.005162659s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-060703
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-060703
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=57d42ff8d541388826f5d9c37044129ec69c3d0a
	                    minikube.k8s.io/name=old-k8s-version-060703
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_20T20_11_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 20 Sep 2024 20:10:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-060703
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 20 Sep 2024 20:19:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 20 Sep 2024 20:14:54 +0000   Fri, 20 Sep 2024 20:10:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 20 Sep 2024 20:14:54 +0000   Fri, 20 Sep 2024 20:10:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 20 Sep 2024 20:14:54 +0000   Fri, 20 Sep 2024 20:10:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 20 Sep 2024 20:14:54 +0000   Fri, 20 Sep 2024 20:11:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-060703
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 3a930d708ea648e99e871be42989f1d3
	  System UUID:                a55c664a-e407-453b-bfe7-ce04de08807d
	  Boot ID:                    cfeac633-1b4b-4878-a7d1-bdd76da68a0f
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m39s
	  kube-system                 coredns-74ff55c5b-5cx2l                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m28s
	  kube-system                 etcd-old-k8s-version-060703                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m34s
	  kube-system                 kindnet-5pl4k                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m28s
	  kube-system                 kube-apiserver-old-k8s-version-060703             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-controller-manager-old-k8s-version-060703    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 kube-proxy-vnktx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m28s
	  kube-system                 kube-scheduler-old-k8s-version-060703             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m34s
	  kube-system                 metrics-server-9975d5f86-rrgz6                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m28s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-lhw7g         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-4ln9z               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m54s (x5 over 8m54s)  kubelet     Node old-k8s-version-060703 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m54s (x5 over 8m54s)  kubelet     Node old-k8s-version-060703 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m54s (x5 over 8m54s)  kubelet     Node old-k8s-version-060703 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m35s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m34s                  kubelet     Node old-k8s-version-060703 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m34s                  kubelet     Node old-k8s-version-060703 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m34s                  kubelet     Node old-k8s-version-060703 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m34s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m28s                  kubelet     Node old-k8s-version-060703 status is now: NodeReady
	  Normal  Starting                 8m27s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m                     kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m (x8 over 6m)        kubelet     Node old-k8s-version-060703 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m (x8 over 6m)        kubelet     Node old-k8s-version-060703 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m (x7 over 6m)        kubelet     Node old-k8s-version-060703 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m                     kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m47s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	
	
	==> etcd [56bf62a945c8839e0c16837894b5b12d033770cf63da3df06224c80bc53a0950] <==
	raft2024/09/20 20:10:53 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/09/20 20:10:53 INFO: ea7e25599daad906 became leader at term 2
	raft2024/09/20 20:10:53 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-09-20 20:10:53.099881 I | etcdserver: published {Name:old-k8s-version-060703 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-09-20 20:10:53.100288 I | etcdserver: setting up the initial cluster version to 3.4
	2024-09-20 20:10:53.100509 I | embed: ready to serve client requests
	2024-09-20 20:10:53.102200 I | embed: serving client requests on 192.168.76.2:2379
	2024-09-20 20:10:53.102389 I | embed: ready to serve client requests
	2024-09-20 20:10:53.110353 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-09-20 20:10:53.110625 I | etcdserver/api: enabled capabilities for version 3.4
	2024-09-20 20:10:53.163018 I | embed: serving client requests on 127.0.0.1:2379
	2024-09-20 20:11:12.853353 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:11:14.029633 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:11:24.029901 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:11:34.029850 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:11:44.029837 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:11:54.029931 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:12:04.029885 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:12:14.029987 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:12:24.029935 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:12:34.029745 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:12:44.029900 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:12:54.030102 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:13:04.029908 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:13:14.030054 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [a71442909c285554343e6076272da58df7a7993274bc073cc91d5d882d30b487] <==
	2024-09-20 20:15:39.758498 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:15:49.758688 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:15:59.758568 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:16:09.758684 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:16:19.758575 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:16:29.758609 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:16:39.758666 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:16:49.758666 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:16:59.758845 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:17:09.758698 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:17:19.758587 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:17:29.758892 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:17:39.758525 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:17:49.758688 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:17:59.758554 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:18:09.758605 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:18:19.758726 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:18:29.758537 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:18:39.758695 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:18:49.758608 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:18:59.758648 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:19:09.758757 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:19:19.759326 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:19:29.758794 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-20 20:19:39.758616 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 20:19:45 up  4:02,  0 users,  load average: 0.84, 1.97, 2.69
	Linux old-k8s-version-060703 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [4d7b9fa90b0c82f4ee82283fad6ca39bd72b6b6e3d95469d6e91454db15ea1c3] <==
	I0920 20:17:37.346666       1 main.go:299] handling current node
	I0920 20:17:47.347000       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:17:47.347038       1 main.go:299] handling current node
	I0920 20:17:57.341121       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:17:57.341158       1 main.go:299] handling current node
	I0920 20:18:07.348330       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:18:07.348366       1 main.go:299] handling current node
	I0920 20:18:17.348877       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:18:17.348968       1 main.go:299] handling current node
	I0920 20:18:27.347448       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:18:27.347490       1 main.go:299] handling current node
	I0920 20:18:37.346042       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:18:37.346099       1 main.go:299] handling current node
	I0920 20:18:47.349566       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:18:47.349602       1 main.go:299] handling current node
	I0920 20:18:57.341107       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:18:57.341149       1 main.go:299] handling current node
	I0920 20:19:07.344183       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:19:07.344221       1 main.go:299] handling current node
	I0920 20:19:17.349756       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:19:17.349794       1 main.go:299] handling current node
	I0920 20:19:27.350003       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:19:27.350039       1 main.go:299] handling current node
	I0920 20:19:37.346463       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:19:37.346496       1 main.go:299] handling current node
	
	
	==> kindnet [dd5d1d400016208884d0eaea14b265bc3034712700233829f2df3a399548ee6c] <==
	I0920 20:11:21.537737       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0920 20:11:21.537777       1 metrics.go:61] Registering metrics
	I0920 20:11:21.537843       1 controller.go:374] Syncing nftables rules
	I0920 20:11:31.335299       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:11:31.335339       1 main.go:299] handling current node
	I0920 20:11:41.335612       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:11:41.335653       1 main.go:299] handling current node
	I0920 20:11:51.344547       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:11:51.344587       1 main.go:299] handling current node
	I0920 20:12:01.343676       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:12:01.343716       1 main.go:299] handling current node
	I0920 20:12:11.335280       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:12:11.335317       1 main.go:299] handling current node
	I0920 20:12:21.339887       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:12:21.339936       1 main.go:299] handling current node
	I0920 20:12:31.336062       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:12:31.336100       1 main.go:299] handling current node
	I0920 20:12:41.342421       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:12:41.342458       1 main.go:299] handling current node
	I0920 20:12:51.335319       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:12:51.335364       1 main.go:299] handling current node
	I0920 20:13:01.342432       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:13:01.342471       1 main.go:299] handling current node
	I0920 20:13:11.335531       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0920 20:13:11.335573       1 main.go:299] handling current node
	
	
	==> kube-apiserver [56e62195e5537fcd4d8f51f18dcf07839a5cb4f372d4ba26811304c694a6346b] <==
	I0920 20:10:59.875635       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0920 20:10:59.882669       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0920 20:10:59.882699       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0920 20:11:00.757575       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0920 20:11:00.801872       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0920 20:11:00.890229       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0920 20:11:00.891619       1 controller.go:606] quota admission added evaluator for: endpoints
	I0920 20:11:00.895963       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0920 20:11:01.541201       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0920 20:11:02.534465       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0920 20:11:02.650480       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0920 20:11:11.029445       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0920 20:11:17.506521       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0920 20:11:17.619582       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0920 20:11:31.473801       1 client.go:360] parsed scheme: "passthrough"
	I0920 20:11:31.473847       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 20:11:31.473856       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0920 20:12:07.230952       1 client.go:360] parsed scheme: "passthrough"
	I0920 20:12:07.231000       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 20:12:07.231009       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0920 20:12:48.794051       1 client.go:360] parsed scheme: "passthrough"
	I0920 20:12:48.794158       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 20:12:48.794235       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	E0920 20:13:15.848200       1 upgradeaware.go:373] Error proxying data from client to backend: write tcp 192.168.76.2:43650->192.168.76.2:10250: write: broken pipe
	E0920 20:13:15.848326       1 upgradeaware.go:387] Error proxying data from backend to client: tls: use of closed connection
	
	
	==> kube-apiserver [9830062e4a05b8c5621d052c0ac7415aafaba5aa2870be98294da31078a6f9cf] <==
	I0920 20:16:17.172265       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 20:16:17.172457       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0920 20:16:56.442622       1 handler_proxy.go:102] no RequestInfo found in the context
	E0920 20:16:56.442702       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0920 20:16:56.442710       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 20:16:58.115981       1 client.go:360] parsed scheme: "passthrough"
	I0920 20:16:58.116030       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 20:16:58.116040       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0920 20:17:32.950164       1 client.go:360] parsed scheme: "passthrough"
	I0920 20:17:32.950209       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 20:17:32.950218       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0920 20:18:12.830163       1 client.go:360] parsed scheme: "passthrough"
	I0920 20:18:12.830224       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 20:18:12.830255       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0920 20:18:54.347535       1 client.go:360] parsed scheme: "passthrough"
	I0920 20:18:54.347581       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 20:18:54.347590       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0920 20:18:54.827979       1 handler_proxy.go:102] no RequestInfo found in the context
	E0920 20:18:54.828254       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0920 20:18:54.828425       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0920 20:19:29.299083       1 client.go:360] parsed scheme: "passthrough"
	I0920 20:19:29.299130       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0920 20:19:29.299139       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [2e3b79d7c928117e9302779631aaeb7c221344a58cc9252788340585a2c2904b] <==
	W0920 20:15:17.062616       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 20:15:44.570590       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 20:15:48.713067       1 request.go:655] Throttling request took 1.048291952s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0920 20:15:49.564589       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 20:16:15.073135       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 20:16:21.215105       1 request.go:655] Throttling request took 1.047896565s, request: GET:https://192.168.76.2:8443/apis/apiextensions.k8s.io/v1beta1?timeout=32s
	W0920 20:16:22.066775       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 20:16:45.575826       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 20:16:53.717237       1 request.go:655] Throttling request took 1.048346437s, request: GET:https://192.168.76.2:8443/apis/certificates.k8s.io/v1beta1?timeout=32s
	W0920 20:16:54.568701       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 20:17:16.084185       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 20:17:26.219231       1 request.go:655] Throttling request took 1.048296643s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0920 20:17:27.071016       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 20:17:46.586210       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 20:17:58.721536       1 request.go:655] Throttling request took 1.048350936s, request: GET:https://192.168.76.2:8443/apis/events.k8s.io/v1?timeout=32s
	W0920 20:17:59.573129       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 20:18:17.088038       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 20:18:31.223830       1 request.go:655] Throttling request took 1.044888866s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0920 20:18:32.075228       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 20:18:47.590218       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 20:19:03.727590       1 request.go:655] Throttling request took 1.048839563s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0920 20:19:04.577411       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0920 20:19:18.093376       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0920 20:19:36.227832       1 request.go:655] Throttling request took 1.048225201s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0920 20:19:37.079513       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	
	==> kube-controller-manager [9676eacbfe78c48f7c25c60b7943745f609bdc8b67c2d5f0f8b2b4f1a4e432c5] <==
	I0920 20:11:17.685455       1 shared_informer.go:247] Caches are synced for expand 
	I0920 20:11:17.712701       1 shared_informer.go:247] Caches are synced for attach detach 
	I0920 20:11:17.723963       1 shared_informer.go:247] Caches are synced for resource quota 
	I0920 20:11:17.738797       1 shared_informer.go:247] Caches are synced for resource quota 
	I0920 20:11:17.757044       1 shared_informer.go:247] Caches are synced for disruption 
	I0920 20:11:17.757066       1 disruption.go:339] Sending events to api server.
	I0920 20:11:17.787548       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5pl4k"
	I0920 20:11:17.788672       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vnktx"
	I0920 20:11:17.788992       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-jplz2"
	I0920 20:11:17.960230       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0920 20:11:18.095299       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-5cx2l"
	I0920 20:11:18.162884       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0920 20:11:18.162904       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	E0920 20:11:18.163924       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"3b9a7e9b-0a3f-4fd0-a4de-da61c3fbbb8f", ResourceVersion:"266", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63862459862, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4000754720), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4000754740)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.
LabelSelector)(0x4000754760), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.Gl
usterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4000deec80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4000754
780), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeS
ource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x40007547a0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil),
AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x4000754840)}}, Resources:v1.R
esourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4000f60480), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPo
licy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40015e3af8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x4000a57420), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), Runtime
ClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4000750730)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40015e3b48)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	I0920 20:11:18.167407       1 shared_informer.go:247] Caches are synced for garbage collector 
	E0920 20:11:18.207549       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"3b9a7e9b-0a3f-4fd0-a4de-da61c3fbbb8f", ResourceVersion:"405", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63862459862, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x40021823c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x40021823e0)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x4002182400), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x4002182420)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x4002182440), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x40020e68c0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4002182460), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x4002182480), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x40021824c0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x40020ceae0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x40020fe5d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004bb1f0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x40015db360)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x40020fe628)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	I0920 20:11:19.419938       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0920 20:11:19.447983       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-jplz2"
	I0920 20:11:22.664749       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0920 20:11:22.666061       1 event.go:291] "Event occurred" object="kube-system/storage-provisioner" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0920 20:11:22.666163       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b-jplz2" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-74ff55c5b-jplz2"
	I0920 20:11:22.666386       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b-5cx2l" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-74ff55c5b-5cx2l"
	I0920 20:13:16.893057       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0920 20:13:17.089635       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	E0920 20:13:17.143825       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [7f96065f6406f118cc78e2beb4ce41b67935ceefa21d878ac71c7f147289d9ca] <==
	I0920 20:11:18.653292       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0920 20:11:18.653400       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0920 20:11:18.698983       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0920 20:11:18.699225       1 server_others.go:185] Using iptables Proxier.
	I0920 20:11:18.703323       1 server.go:650] Version: v1.20.0
	I0920 20:11:18.704236       1 config.go:315] Starting service config controller
	I0920 20:11:18.704255       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0920 20:11:18.705380       1 config.go:224] Starting endpoint slice config controller
	I0920 20:11:18.705395       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0920 20:11:18.805449       1 shared_informer.go:247] Caches are synced for service config 
	I0920 20:11:18.805540       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [9f93cfb56d5061f27e2716d238b65361ae444800034f643bfaf255281067ab06] <==
	I0920 20:13:58.102038       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0920 20:13:58.102345       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0920 20:13:58.138187       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0920 20:13:58.138673       1 server_others.go:185] Using iptables Proxier.
	I0920 20:13:58.139091       1 server.go:650] Version: v1.20.0
	I0920 20:13:58.140798       1 config.go:315] Starting service config controller
	I0920 20:13:58.140972       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0920 20:13:58.141111       1 config.go:224] Starting endpoint slice config controller
	I0920 20:13:58.141198       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0920 20:13:58.241212       1 shared_informer.go:247] Caches are synced for service config 
	I0920 20:13:58.242395       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [115b7d1c4f8b9cdba309973fd5aec53ec56a81f42cdf7e9b8a9690ba26529a7c] <==
	I0920 20:10:59.173691       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0920 20:10:59.189022       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 20:10:59.189251       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 20:10:59.189475       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0920 20:10:59.189657       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 20:10:59.189840       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0920 20:10:59.189984       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 20:10:59.190226       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 20:10:59.190463       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 20:10:59.190601       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 20:10:59.190730       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 20:10:59.191283       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 20:10:59.197907       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 20:11:00.031451       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0920 20:11:00.127572       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0920 20:11:00.127709       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0920 20:11:00.133307       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0920 20:11:00.288246       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0920 20:11:00.288294       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0920 20:11:00.334608       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0920 20:11:00.336165       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0920 20:11:00.428061       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0920 20:11:00.533904       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0920 20:11:00.582801       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0920 20:11:01.976877       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [fd2084c4d424a4e4be4ff1e6ec5a2989267fb40dd10997823ec9709aafae4bcd] <==
	I0920 20:13:48.885123       1 serving.go:331] Generated self-signed cert in-memory
	W0920 20:13:53.736408       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0920 20:13:53.736452       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0920 20:13:53.736462       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0920 20:13:53.736467       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0920 20:13:53.981389       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0920 20:13:53.984798       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 20:13:53.984852       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0920 20:13:53.984875       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0920 20:13:54.086766       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 20 20:18:15 old-k8s-version-060703 kubelet[659]: I0920 20:18:15.534463     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35be92d89af3cc5401fdd10445e6fc044327c8a6b71254445df33d3bd3cc37e6
	Sep 20 20:18:15 old-k8s-version-060703 kubelet[659]: E0920 20:18:15.534869     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	Sep 20 20:18:18 old-k8s-version-060703 kubelet[659]: E0920 20:18:18.534940     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 20:18:28 old-k8s-version-060703 kubelet[659]: I0920 20:18:28.533915     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35be92d89af3cc5401fdd10445e6fc044327c8a6b71254445df33d3bd3cc37e6
	Sep 20 20:18:28 old-k8s-version-060703 kubelet[659]: E0920 20:18:28.534266     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	Sep 20 20:18:33 old-k8s-version-060703 kubelet[659]: E0920 20:18:33.534834     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 20:18:42 old-k8s-version-060703 kubelet[659]: I0920 20:18:42.534102     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35be92d89af3cc5401fdd10445e6fc044327c8a6b71254445df33d3bd3cc37e6
	Sep 20 20:18:42 old-k8s-version-060703 kubelet[659]: E0920 20:18:42.534713     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	Sep 20 20:18:46 old-k8s-version-060703 kubelet[659]: E0920 20:18:46.535307     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 20:18:54 old-k8s-version-060703 kubelet[659]: I0920 20:18:54.533946     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35be92d89af3cc5401fdd10445e6fc044327c8a6b71254445df33d3bd3cc37e6
	Sep 20 20:18:54 old-k8s-version-060703 kubelet[659]: E0920 20:18:54.534370     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	Sep 20 20:18:59 old-k8s-version-060703 kubelet[659]: E0920 20:18:59.534951     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 20:19:08 old-k8s-version-060703 kubelet[659]: I0920 20:19:08.533947     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35be92d89af3cc5401fdd10445e6fc044327c8a6b71254445df33d3bd3cc37e6
	Sep 20 20:19:08 old-k8s-version-060703 kubelet[659]: E0920 20:19:08.534284     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	Sep 20 20:19:10 old-k8s-version-060703 kubelet[659]: E0920 20:19:10.534698     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 20:19:20 old-k8s-version-060703 kubelet[659]: I0920 20:19:20.533955     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35be92d89af3cc5401fdd10445e6fc044327c8a6b71254445df33d3bd3cc37e6
	Sep 20 20:19:20 old-k8s-version-060703 kubelet[659]: E0920 20:19:20.534289     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	Sep 20 20:19:21 old-k8s-version-060703 kubelet[659]: E0920 20:19:21.534640     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 20 20:19:31 old-k8s-version-060703 kubelet[659]: I0920 20:19:31.537444     659 scope.go:95] [topologymanager] RemoveContainer - Container ID: 35be92d89af3cc5401fdd10445e6fc044327c8a6b71254445df33d3bd3cc37e6
	Sep 20 20:19:31 old-k8s-version-060703 kubelet[659]: E0920 20:19:31.540426     659 pod_workers.go:191] Error syncing pod 7f1dbe5e-72ff-4078-b75f-c9cd60fff914 ("dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-lhw7g_kubernetes-dashboard(7f1dbe5e-72ff-4078-b75f-c9cd60fff914)"
	Sep 20 20:19:32 old-k8s-version-060703 kubelet[659]: E0920 20:19:32.548643     659 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 20 20:19:32 old-k8s-version-060703 kubelet[659]: E0920 20:19:32.548715     659 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 20 20:19:32 old-k8s-version-060703 kubelet[659]: E0920 20:19:32.548894     659 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-69pvz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-rrgz6_kube-system(591cce3
4-2afa-4ca0-b839-bcf4a9126af9): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 20 20:19:32 old-k8s-version-060703 kubelet[659]: E0920 20:19:32.548953     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 20 20:19:43 old-k8s-version-060703 kubelet[659]: E0920 20:19:43.559029     659 pod_workers.go:191] Error syncing pod 591cce34-2afa-4ca0-b839-bcf4a9126af9 ("metrics-server-9975d5f86-rrgz6_kube-system(591cce34-2afa-4ca0-b839-bcf4a9126af9)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> kubernetes-dashboard [cb307e7144c5a3b083f3c34331c270026a1c7b317a40bed72b85f099a061f5b5] <==
	2024/09/20 20:14:24 Starting overwatch
	2024/09/20 20:14:24 Using namespace: kubernetes-dashboard
	2024/09/20 20:14:24 Using in-cluster config to connect to apiserver
	2024/09/20 20:14:24 Using secret token for csrf signing
	2024/09/20 20:14:24 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/20 20:14:24 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/20 20:14:24 Successful initial request to the apiserver, version: v1.20.0
	2024/09/20 20:14:24 Generating JWE encryption key
	2024/09/20 20:14:24 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/20 20:14:24 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/20 20:14:25 Initializing JWE encryption key from synchronized object
	2024/09/20 20:14:25 Creating in-cluster Sidecar client
	2024/09/20 20:14:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 20:14:25 Serving insecurely on HTTP port: 9090
	2024/09/20 20:14:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 20:15:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 20:15:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 20:16:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 20:16:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 20:17:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 20:17:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 20:18:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 20:18:55 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/20 20:19:25 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [81118d4396158739b56e864be3df707098871bebabeeed0ea7ac25fa65540524] <==
	I0920 20:13:57.952979       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0920 20:14:27.954642       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [9aefdeb7d5aac5147087d3269954a3aa7e5a25d7014644dc09ef1f02a44e773e] <==
	I0920 20:14:40.679763       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0920 20:14:40.698412       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0920 20:14:40.698636       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0920 20:14:58.183471       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0920 20:14:58.183716       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-060703_5f6a4efe-1764-43c2-96fd-bd07e5343acc!
	I0920 20:14:58.184564       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"df70efc1-71c3-4b95-9600-93cb7059f494", APIVersion:"v1", ResourceVersion:"881", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-060703_5f6a4efe-1764-43c2-96fd-bd07e5343acc became leader
	I0920 20:14:58.284422       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-060703_5f6a4efe-1764-43c2-96fd-bd07e5343acc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-060703 -n old-k8s-version-060703
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-060703 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-rrgz6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-060703 describe pod metrics-server-9975d5f86-rrgz6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-060703 describe pod metrics-server-9975d5f86-rrgz6: exit status 1 (108.147011ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-rrgz6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-060703 describe pod metrics-server-9975d5f86-rrgz6: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (377.84s)
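
The captured logs above show three things: the metrics-server pod is stuck in ErrImagePull/ImagePullBackOff because its image points at the deliberately unresolvable registry host fake.domain, the dashboard is serving on port 9090 but keeps retrying its dashboard-metrics-scraper health check, and the restarted storage-provisioner re-acquires the k8s.io-minikube-hostpath leader-election record. The post-mortem describe above most likely returned NotFound only because it queried the default namespace; the pod lives in kube-system. A hedged sketch of inspecting the same state by hand while the profile is still running (profile, pod, and object names are the ones from this run):

  # pod status across kube-system for this profile
  kubectl --context old-k8s-version-060703 -n kube-system get pods -o wide
  # image-pull events for the failing metrics-server pod
  kubectl --context old-k8s-version-060703 -n kube-system describe pod metrics-server-9975d5f86-rrgz6
  # current holder of the hostpath provisioner's leader-election record
  kubectl --context old-k8s-version-060703 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml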

                                                
                                    

Test pass (298/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 13.71
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 5.5
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.21
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.58
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 219.76
31 TestAddons/serial/GCPAuth/Namespaces 0.19
33 TestAddons/parallel/Registry 17.26
34 TestAddons/parallel/Ingress 19.99
35 TestAddons/parallel/InspektorGadget 11.16
36 TestAddons/parallel/MetricsServer 6.9
38 TestAddons/parallel/CSI 69.93
39 TestAddons/parallel/Headlamp 16.01
40 TestAddons/parallel/CloudSpanner 5.64
41 TestAddons/parallel/LocalPath 8.95
42 TestAddons/parallel/NvidiaDevicePlugin 6.57
43 TestAddons/parallel/Yakd 11.88
44 TestAddons/StoppedEnableDisable 12.34
45 TestCertOptions 37.29
46 TestCertExpiration 229.5
48 TestForceSystemdFlag 37.38
49 TestForceSystemdEnv 43.88
50 TestDockerEnvContainerd 42.94
55 TestErrorSpam/setup 30.41
56 TestErrorSpam/start 0.82
57 TestErrorSpam/status 1.09
58 TestErrorSpam/pause 1.86
59 TestErrorSpam/unpause 1.88
60 TestErrorSpam/stop 1.52
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 49.24
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 6.78
67 TestFunctional/serial/KubeContext 0.07
68 TestFunctional/serial/KubectlGetPods 0.09
71 TestFunctional/serial/CacheCmd/cache/add_remote 4.1
72 TestFunctional/serial/CacheCmd/cache/add_local 1.31
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.99
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
80 TestFunctional/serial/ExtraConfig 40.52
81 TestFunctional/serial/ComponentHealth 0.11
82 TestFunctional/serial/LogsCmd 1.76
83 TestFunctional/serial/LogsFileCmd 1.76
84 TestFunctional/serial/InvalidService 4.74
86 TestFunctional/parallel/ConfigCmd 0.49
87 TestFunctional/parallel/DashboardCmd 9.86
88 TestFunctional/parallel/DryRun 0.61
89 TestFunctional/parallel/InternationalLanguage 0.25
90 TestFunctional/parallel/StatusCmd 1.18
94 TestFunctional/parallel/ServiceCmdConnect 10.76
95 TestFunctional/parallel/AddonsCmd 0.16
96 TestFunctional/parallel/PersistentVolumeClaim 28.02
98 TestFunctional/parallel/SSHCmd 0.65
99 TestFunctional/parallel/CpCmd 2.1
101 TestFunctional/parallel/FileSync 0.33
102 TestFunctional/parallel/CertSync 2.16
106 TestFunctional/parallel/NodeLabels 0.11
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.74
110 TestFunctional/parallel/License 0.29
111 TestFunctional/parallel/Version/short 0.06
112 TestFunctional/parallel/Version/components 1.29
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.33
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
117 TestFunctional/parallel/ImageCommands/ImageBuild 3.72
118 TestFunctional/parallel/ImageCommands/Setup 0.68
119 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
120 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
121 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
122 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.46
123 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.5
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.55
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.65
126 TestFunctional/parallel/ProfileCmd/profile_list 0.5
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.52
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.45
130 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.87
132 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
134 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.43
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.76
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.59
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
143 TestFunctional/parallel/ServiceCmd/DeployApp 7.26
144 TestFunctional/parallel/ServiceCmd/List 0.5
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.6
147 TestFunctional/parallel/ServiceCmd/Format 0.57
148 TestFunctional/parallel/MountCmd/any-port 7.88
149 TestFunctional/parallel/ServiceCmd/URL 0.43
150 TestFunctional/parallel/MountCmd/specific-port 2.2
151 TestFunctional/parallel/MountCmd/VerifyCleanup 2.31
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 121.24
159 TestMultiControlPlane/serial/DeployApp 33.19
160 TestMultiControlPlane/serial/PingHostFromPods 1.75
161 TestMultiControlPlane/serial/AddWorkerNode 24.32
162 TestMultiControlPlane/serial/NodeLabels 0.1
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.01
164 TestMultiControlPlane/serial/CopyFile 19.66
165 TestMultiControlPlane/serial/StopSecondaryNode 12.87
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
167 TestMultiControlPlane/serial/RestartSecondaryNode 18.08
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.03
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 153.35
170 TestMultiControlPlane/serial/DeleteSecondaryNode 10.86
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.77
172 TestMultiControlPlane/serial/StopCluster 36.23
173 TestMultiControlPlane/serial/RestartCluster 43.21
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.04
175 TestMultiControlPlane/serial/AddSecondaryNode 43.14
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.03
180 TestJSONOutput/start/Command 90.24
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.77
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.67
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.8
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.22
205 TestKicCustomNetwork/create_custom_network 41.22
206 TestKicCustomNetwork/use_default_bridge_network 31.59
207 TestKicExistingNetwork 33.61
208 TestKicCustomSubnet 34.06
209 TestKicStaticIP 34.88
210 TestMainNoArgs 0.13
211 TestMinikubeProfile 65.74
214 TestMountStart/serial/StartWithMountFirst 6.75
215 TestMountStart/serial/VerifyMountFirst 0.27
216 TestMountStart/serial/StartWithMountSecond 9.07
217 TestMountStart/serial/VerifyMountSecond 0.27
218 TestMountStart/serial/DeleteFirst 1.65
219 TestMountStart/serial/VerifyMountPostDelete 0.26
220 TestMountStart/serial/Stop 1.19
221 TestMountStart/serial/RestartStopped 7.73
222 TestMountStart/serial/VerifyMountPostStop 0.26
225 TestMultiNode/serial/FreshStart2Nodes 69.19
226 TestMultiNode/serial/DeployApp2Nodes 20.89
227 TestMultiNode/serial/PingHostFrom2Pods 1
228 TestMultiNode/serial/AddNode 16.7
229 TestMultiNode/serial/MultiNodeLabels 0.09
230 TestMultiNode/serial/ProfileList 0.71
231 TestMultiNode/serial/CopyFile 10.29
232 TestMultiNode/serial/StopNode 2.34
233 TestMultiNode/serial/StartAfterStop 9.76
234 TestMultiNode/serial/RestartKeepsNodes 81.11
235 TestMultiNode/serial/DeleteNode 5.26
236 TestMultiNode/serial/StopMultiNode 24.12
237 TestMultiNode/serial/RestartMultiNode 54.59
238 TestMultiNode/serial/ValidateNameConflict 33.81
243 TestPreload 113.73
245 TestScheduledStopUnix 109.29
248 TestInsufficientStorage 11.51
249 TestRunningBinaryUpgrade 85.52
251 TestKubernetesUpgrade 354.16
252 TestMissingContainerUpgrade 167.01
254 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
255 TestNoKubernetes/serial/StartWithK8s 40.52
256 TestNoKubernetes/serial/StartWithStopK8s 11.37
257 TestNoKubernetes/serial/Start 6.63
258 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
259 TestNoKubernetes/serial/ProfileList 0.98
260 TestNoKubernetes/serial/Stop 1.21
261 TestNoKubernetes/serial/StartNoArgs 6.39
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
263 TestStoppedBinaryUpgrade/Setup 0.82
264 TestStoppedBinaryUpgrade/Upgrade 116.02
265 TestStoppedBinaryUpgrade/MinikubeLogs 1.25
274 TestPause/serial/Start 95.73
282 TestNetworkPlugins/group/false 3.99
283 TestPause/serial/SecondStartNoReconfiguration 7.35
287 TestPause/serial/Pause 1.31
288 TestPause/serial/VerifyStatus 0.43
289 TestPause/serial/Unpause 1
290 TestPause/serial/PauseAgain 1.04
291 TestPause/serial/DeletePaused 3.2
292 TestPause/serial/VerifyDeletedResources 0.5
294 TestStartStop/group/old-k8s-version/serial/FirstStart 167.12
296 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 55.67
297 TestStartStop/group/old-k8s-version/serial/DeployApp 9.67
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.56
299 TestStartStop/group/old-k8s-version/serial/Stop 12.25
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
302 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.4
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.2
304 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.22
305 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
306 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 290.91
307 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
308 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.16
309 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
310 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.53
312 TestStartStop/group/embed-certs/serial/FirstStart 54.57
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
316 TestStartStop/group/old-k8s-version/serial/Pause 3.94
318 TestStartStop/group/no-preload/serial/FirstStart 62.88
319 TestStartStop/group/embed-certs/serial/DeployApp 10.42
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.57
321 TestStartStop/group/embed-certs/serial/Stop 12.29
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
323 TestStartStop/group/embed-certs/serial/SecondStart 280.31
324 TestStartStop/group/no-preload/serial/DeployApp 9.42
325 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.2
326 TestStartStop/group/no-preload/serial/Stop 12.14
327 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
328 TestStartStop/group/no-preload/serial/SecondStart 271.44
329 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.12
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
332 TestStartStop/group/embed-certs/serial/Pause 3.16
334 TestStartStop/group/newest-cni/serial/FirstStart 36.87
335 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
336 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
337 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
338 TestStartStop/group/no-preload/serial/Pause 4.64
339 TestStartStop/group/newest-cni/serial/DeployApp 0
340 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.76
341 TestStartStop/group/newest-cni/serial/Stop 1.72
342 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
343 TestStartStop/group/newest-cni/serial/SecondStart 21.83
344 TestNetworkPlugins/group/auto/Start 97.26
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.33
348 TestStartStop/group/newest-cni/serial/Pause 3.66
349 TestNetworkPlugins/group/kindnet/Start 55.91
350 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
351 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
352 TestNetworkPlugins/group/kindnet/NetCatPod 9.3
353 TestNetworkPlugins/group/auto/KubeletFlags 0.28
354 TestNetworkPlugins/group/auto/NetCatPod 9.33
355 TestNetworkPlugins/group/kindnet/DNS 0.27
356 TestNetworkPlugins/group/kindnet/Localhost 0.2
357 TestNetworkPlugins/group/kindnet/HairPin 0.19
358 TestNetworkPlugins/group/auto/DNS 0.29
359 TestNetworkPlugins/group/auto/Localhost 0.22
360 TestNetworkPlugins/group/auto/HairPin 0.23
361 TestNetworkPlugins/group/calico/Start 72.86
362 TestNetworkPlugins/group/custom-flannel/Start 54.9
363 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
364 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.29
365 TestNetworkPlugins/group/custom-flannel/DNS 0.24
366 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
367 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
368 TestNetworkPlugins/group/calico/ControllerPod 6.01
369 TestNetworkPlugins/group/calico/KubeletFlags 0.41
370 TestNetworkPlugins/group/calico/NetCatPod 9.37
371 TestNetworkPlugins/group/calico/DNS 0.24
372 TestNetworkPlugins/group/calico/Localhost 0.19
373 TestNetworkPlugins/group/calico/HairPin 0.2
374 TestNetworkPlugins/group/enable-default-cni/Start 86.95
375 TestNetworkPlugins/group/flannel/Start 56.87
376 TestNetworkPlugins/group/flannel/ControllerPod 6.01
377 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
378 TestNetworkPlugins/group/flannel/NetCatPod 9.28
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.28
381 TestNetworkPlugins/group/flannel/DNS 0.21
382 TestNetworkPlugins/group/flannel/Localhost 0.15
383 TestNetworkPlugins/group/flannel/HairPin 0.16
384 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
385 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
387 TestNetworkPlugins/group/bridge/Start 42.1
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
389 TestNetworkPlugins/group/bridge/NetCatPod 9.25
390 TestNetworkPlugins/group/bridge/DNS 0.17
391 TestNetworkPlugins/group/bridge/Localhost 0.17
392 TestNetworkPlugins/group/bridge/HairPin 0.16
x
+
TestDownloadOnly/v1.20.0/json-events (13.71s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-790946 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-790946 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.711370017s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (13.71s)
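
This test drives minikube start in --download-only mode with -o=json and parses the emitted JSON progress events. A minimal sketch of watching the same event stream by hand, assuming jq is installed and using a throwaway profile name (download-demo is hypothetical; the exact flags the harness passes are shown above):

  # stream the JSON progress events and pretty-print each one
  out/minikube-linux-arm64 start -o=json --download-only -p download-demo \
    --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker | jq .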

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0920 19:22:56.587752  739787 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
I0920 19:22:56.587841  739787 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-734403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
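
The preload-exists check only verifies that the previous download left the lz4 preload tarball in the shared cache. A quick way to confirm the same artifact on disk (the path is the MINIKUBE_HOME used by this run):

  # the preload tarball the test looks for
  ls -lh /home/jenkins/minikube-integration/19678-734403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4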

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-790946
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-790946: exit status 85 (68.951876ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-790946 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC |          |
	|         | -p download-only-790946        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:22:42
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:22:42.927008  739792 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:22:42.927168  739792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:22:42.927179  739792 out.go:358] Setting ErrFile to fd 2...
	I0920 19:22:42.927185  739792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:22:42.927427  739792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-734403/.minikube/bin
	W0920 19:22:42.927577  739792 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19678-734403/.minikube/config/config.json: open /home/jenkins/minikube-integration/19678-734403/.minikube/config/config.json: no such file or directory
	I0920 19:22:42.927994  739792 out.go:352] Setting JSON to true
	I0920 19:22:42.928928  739792 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11114,"bootTime":1726849049,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 19:22:42.929014  739792 start.go:139] virtualization:  
	I0920 19:22:42.932019  739792 out.go:97] [download-only-790946] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0920 19:22:42.932181  739792 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19678-734403/.minikube/cache/preloaded-tarball: no such file or directory
	I0920 19:22:42.932232  739792 notify.go:220] Checking for updates...
	I0920 19:22:42.934289  739792 out.go:169] MINIKUBE_LOCATION=19678
	I0920 19:22:42.936428  739792 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:22:42.938449  739792 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19678-734403/kubeconfig
	I0920 19:22:42.940285  739792 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-734403/.minikube
	I0920 19:22:42.942138  739792 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0920 19:22:42.945762  739792 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 19:22:42.946009  739792 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:22:42.971733  739792 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:22:42.971845  739792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:22:43.042641  739792 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 19:22:43.03187031 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:22:43.042761  739792 docker.go:318] overlay module found
	I0920 19:22:43.045170  739792 out.go:97] Using the docker driver based on user configuration
	I0920 19:22:43.045197  739792 start.go:297] selected driver: docker
	I0920 19:22:43.045208  739792 start.go:901] validating driver "docker" against <nil>
	I0920 19:22:43.045337  739792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:22:43.097065  739792 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 19:22:43.087367885 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:22:43.097300  739792 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 19:22:43.097579  739792 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0920 19:22:43.097736  739792 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 19:22:43.100402  739792 out.go:169] Using Docker driver with root privileges
	I0920 19:22:43.102163  739792 cni.go:84] Creating CNI manager for ""
	I0920 19:22:43.102225  739792 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 19:22:43.102234  739792 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 19:22:43.102380  739792 start.go:340] cluster config:
	{Name:download-only-790946 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-790946 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:22:43.104417  739792 out.go:97] Starting "download-only-790946" primary control-plane node in "download-only-790946" cluster
	I0920 19:22:43.104442  739792 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0920 19:22:43.106320  739792 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0920 19:22:43.106357  739792 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0920 19:22:43.106509  739792 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 19:22:43.124712  739792 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 19:22:43.124951  739792 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 19:22:43.125076  739792 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 19:22:43.173282  739792 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0920 19:22:43.173330  739792 cache.go:56] Caching tarball of preloaded images
	I0920 19:22:43.173506  739792 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0920 19:22:43.176324  739792 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0920 19:22:43.176351  739792 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0920 19:22:43.262495  739792 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19678-734403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0920 19:22:47.924194  739792 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0920 19:22:47.924315  739792 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19678-734403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0920 19:22:49.034601  739792 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0920 19:22:49.034996  739792 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/download-only-790946/config.json ...
	I0920 19:22:49.035033  739792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/download-only-790946/config.json: {Name:mk63f34e0fb99ea30470d7b6026706316541beeb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:22:49.035226  739792 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0920 19:22:49.035427  739792 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19678-734403/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-790946 host does not exist
	  To start a cluster, run: "minikube start -p download-only-790946"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
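
The LogsDuration check runs minikube logs against the download-only profile and tolerates the non-zero exit seen above (status 85 in this run), since the profile never created a host. A sketch of reproducing the check by hand while such a profile still exists (it is removed by the Delete tests below):

  # capture the exit status of `logs` against a download-only profile
  out/minikube-linux-arm64 logs -p download-only-790946; echo "exit status: $?"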

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-790946
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/json-events (5.5s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-509320 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-509320 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.497569192s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.50s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0920 19:23:02.486440  739787 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
I0920 19:23:02.486483  739787 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19678-734403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-509320
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-509320: exit status 85 (84.234051ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-790946 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC |                     |
	|         | -p download-only-790946        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	| delete  | -p download-only-790946        | download-only-790946 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC | 20 Sep 24 19:22 UTC |
	| start   | -o=json --download-only        | download-only-509320 | jenkins | v1.34.0 | 20 Sep 24 19:22 UTC |                     |
	|         | -p download-only-509320        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/20 19:22:57
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0920 19:22:57.037154  739994 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:22:57.037307  739994 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:22:57.037318  739994 out.go:358] Setting ErrFile to fd 2...
	I0920 19:22:57.037323  739994 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:22:57.037560  739994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-734403/.minikube/bin
	I0920 19:22:57.037974  739994 out.go:352] Setting JSON to true
	I0920 19:22:57.038894  739994 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11128,"bootTime":1726849049,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 19:22:57.038975  739994 start.go:139] virtualization:  
	I0920 19:22:57.041908  739994 out.go:97] [download-only-509320] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 19:22:57.042143  739994 notify.go:220] Checking for updates...
	I0920 19:22:57.044124  739994 out.go:169] MINIKUBE_LOCATION=19678
	I0920 19:22:57.046064  739994 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:22:57.047971  739994 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19678-734403/kubeconfig
	I0920 19:22:57.049725  739994 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-734403/.minikube
	I0920 19:22:57.051759  739994 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0920 19:22:57.056373  739994 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0920 19:22:57.056644  739994 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:22:57.093228  739994 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:22:57.093359  739994 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:22:57.155729  739994 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 19:22:57.145533724 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:22:57.155844  739994 docker.go:318] overlay module found
	I0920 19:22:57.158146  739994 out.go:97] Using the docker driver based on user configuration
	I0920 19:22:57.158182  739994 start.go:297] selected driver: docker
	I0920 19:22:57.158190  739994 start.go:901] validating driver "docker" against <nil>
	I0920 19:22:57.158297  739994 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:22:57.221876  739994 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-20 19:22:57.212492008 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:22:57.222085  739994 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0920 19:22:57.222552  739994 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0920 19:22:57.222717  739994 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0920 19:22:57.225448  739994 out.go:169] Using Docker driver with root privileges
	I0920 19:22:57.227421  739994 cni.go:84] Creating CNI manager for ""
	I0920 19:22:57.227493  739994 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0920 19:22:57.227508  739994 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0920 19:22:57.227610  739994 start.go:340] cluster config:
	{Name:download-only-509320 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-509320 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:22:57.230101  739994 out.go:97] Starting "download-only-509320" primary control-plane node in "download-only-509320" cluster
	I0920 19:22:57.230131  739994 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0920 19:22:57.232109  739994 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0920 19:22:57.232136  739994 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 19:22:57.232331  739994 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0920 19:22:57.247907  739994 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0920 19:22:57.248036  739994 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0920 19:22:57.248063  739994 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0920 19:22:57.248068  739994 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0920 19:22:57.248076  739994 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0920 19:22:57.294174  739994 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0920 19:22:57.294199  739994 cache.go:56] Caching tarball of preloaded images
	I0920 19:22:57.294957  739994 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 19:22:57.297221  739994 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0920 19:22:57.297250  739994 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0920 19:22:57.378162  739994 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19678-734403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0920 19:23:00.932240  739994 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0920 19:23:00.932372  739994 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19678-734403/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0920 19:23:01.791805  739994 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0920 19:23:01.792231  739994 profile.go:143] Saving config to /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/download-only-509320/config.json ...
	I0920 19:23:01.792267  739994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/download-only-509320/config.json: {Name:mk4d735e4d13e3a2827815511efd3df230810db9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0920 19:23:01.793047  739994 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0920 19:23:01.793207  739994 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19678-734403/.minikube/cache/linux/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-509320 host does not exist
	  To start a cluster, run: "minikube start -p download-only-509320"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.21s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-509320
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
I0920 19:23:03.741103  739787 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-080065 --alsologtostderr --binary-mirror http://127.0.0.1:33183 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-080065" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-080065
--- PASS: TestBinaryMirror (0.58s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-388835
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-388835: exit status 85 (65.311159ms)

-- stdout --
	* Profile "addons-388835" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-388835"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-388835
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-388835: exit status 85 (79.90569ms)

-- stdout --
	* Profile "addons-388835" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-388835"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (219.76s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-388835 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-388835 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m39.756332275s)
--- PASS: TestAddons/Setup (219.76s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-388835 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-388835 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/parallel/Registry (17.26s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 3.995359ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-mr26g" [b31af5af-2dd0-483b-bb89-7be808c67c81] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.01295318s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-vt26f" [c4bffcb8-e224-4e7d-9149-e0a9c22d46f4] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003983727s
addons_test.go:338: (dbg) Run:  kubectl --context addons-388835 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-388835 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Done: kubectl --context addons-388835 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.131335804s)
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-388835 ip
2024/09/20 19:30:39 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-388835 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.26s)

TestAddons/parallel/Ingress (19.99s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-388835 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-388835 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-388835 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [19f71938-3624-4e37-904c-eef3632bf363] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [19f71938-3624-4e37-904c-eef3632bf363] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003769117s
I0920 19:31:45.489363  739787 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-388835 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-388835 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-388835 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-388835 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-388835 addons disable ingress-dns --alsologtostderr -v=1: (1.248562387s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-388835 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-388835 addons disable ingress --alsologtostderr -v=1: (7.943872408s)
--- PASS: TestAddons/parallel/Ingress (19.99s)

TestAddons/parallel/InspektorGadget (11.16s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4njwd" [a5380331-5c07-4ad6-a2d8-b954f4e41e9e] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004578393s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-388835
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-388835: (6.156750483s)
--- PASS: TestAddons/parallel/InspektorGadget (11.16s)

TestAddons/parallel/MetricsServer (6.9s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 3.052608ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-qpwkg" [6d2ee6cf-4892-473c-a648-0405b31eddf6] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003718866s
addons_test.go:413: (dbg) Run:  kubectl --context addons-388835 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-388835 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.90s)

TestAddons/parallel/CSI (69.93s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0920 19:30:35.187393  739787 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0920 19:30:35.193033  739787 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0920 19:30:35.193062  739787 kapi.go:107] duration metric: took 5.681219ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 5.690663ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-388835 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-388835 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a6b204c7-c236-4ea8-9587-6c6a599dc28b] Pending
helpers_test.go:344: "task-pv-pod" [a6b204c7-c236-4ea8-9587-6c6a599dc28b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a6b204c7-c236-4ea8-9587-6c6a599dc28b] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003955263s
addons_test.go:528: (dbg) Run:  kubectl --context addons-388835 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388835 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-388835 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-388835 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-388835 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-388835 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-388835 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [e36a3146-0da1-4d80-a6dc-694a594421c7] Pending
helpers_test.go:344: "task-pv-pod-restore" [e36a3146-0da1-4d80-a6dc-694a594421c7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [e36a3146-0da1-4d80-a6dc-694a594421c7] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003835006s
addons_test.go:570: (dbg) Run:  kubectl --context addons-388835 delete pod task-pv-pod-restore
addons_test.go:570: (dbg) Done: kubectl --context addons-388835 delete pod task-pv-pod-restore: (1.173302841s)
addons_test.go:574: (dbg) Run:  kubectl --context addons-388835 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-388835 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-388835 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-388835 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.942353995s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-388835 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:586: (dbg) Done: out/minikube-linux-arm64 -p addons-388835 addons disable volumesnapshots --alsologtostderr -v=1: (1.036102373s)
--- PASS: TestAddons/parallel/CSI (69.93s)

TestAddons/parallel/Headlamp (16.01s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-388835 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-388835 --alsologtostderr -v=1: (1.129254011s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-8bdbg" [a90aa800-f86c-4741-b020-b64f0bc08cf9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-8bdbg" [a90aa800-f86c-4741-b020-b64f0bc08cf9] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.00364298s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-388835 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-388835 addons disable headlamp --alsologtostderr -v=1: (5.871091361s)
--- PASS: TestAddons/parallel/Headlamp (16.01s)

TestAddons/parallel/CloudSpanner (5.64s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-d77bb" [b3a36094-e0c5-4c6a-a95b-23bf06d3e31b] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003973942s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-388835
--- PASS: TestAddons/parallel/CloudSpanner (5.64s)

TestAddons/parallel/LocalPath (8.95s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-388835 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-388835 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-388835 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [17876e97-2c8e-425b-9a92-672401f89201] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [17876e97-2c8e-425b-9a92-672401f89201] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [17876e97-2c8e-425b-9a92-672401f89201] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004371492s
addons_test.go:938: (dbg) Run:  kubectl --context addons-388835 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-388835 ssh "cat /opt/local-path-provisioner/pvc-12efecdc-a4e5-4474-87d7-5397286392c0_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-388835 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-388835 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-388835 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.95s)

TestAddons/parallel/NvidiaDevicePlugin (6.57s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-pst9m" [a63a2067-c0d8-4755-bc26-33ef9f8e8c7d] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004029288s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-388835
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.57s)

TestAddons/parallel/Yakd (11.88s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-j5kc9" [41735364-faf0-4471-9cbd-6e979bca4120] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003409771s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-388835 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-388835 addons disable yakd --alsologtostderr -v=1: (5.875420644s)
--- PASS: TestAddons/parallel/Yakd (11.88s)

TestAddons/StoppedEnableDisable (12.34s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-388835
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-388835: (12.056757086s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-388835
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-388835
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-388835
--- PASS: TestAddons/StoppedEnableDisable (12.34s)

TestCertOptions (37.29s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-485064 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-485064 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.626124534s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-485064 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-485064 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-485064 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-485064" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-485064
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-485064: (1.986753692s)
--- PASS: TestCertOptions (37.29s)

TestCertExpiration (229.5s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-284358 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-284358 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (39.576222969s)
E0920 20:09:47.244929  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-284358 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-284358 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.499751766s)
helpers_test.go:175: Cleaning up "cert-expiration-284358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-284358
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-284358: (2.426405941s)
--- PASS: TestCertExpiration (229.50s)

TestForceSystemdFlag (37.38s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-922343 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-922343 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (35.030700902s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-922343 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-922343" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-922343
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-922343: (2.034106868s)
--- PASS: TestForceSystemdFlag (37.38s)

TestForceSystemdEnv (43.88s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-169165 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-169165 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.579768221s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-169165 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-169165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-169165
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-169165: (2.827821576s)
--- PASS: TestForceSystemdEnv (43.88s)

TestDockerEnvContainerd (42.94s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-156209 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-156209 --driver=docker  --container-runtime=containerd: (26.912293054s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-156209"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-156209": (1.018268382s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-RCvTi89jrLpH/agent.758694" SSH_AGENT_PID="758695" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-RCvTi89jrLpH/agent.758694" SSH_AGENT_PID="758695" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-RCvTi89jrLpH/agent.758694" SSH_AGENT_PID="758695" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.30438126s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-RCvTi89jrLpH/agent.758694" SSH_AGENT_PID="758695" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-156209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-156209
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-156209: (2.33250825s)
--- PASS: TestDockerEnvContainerd (42.94s)

TestErrorSpam/setup (30.41s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-805660 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-805660 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-805660 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-805660 --driver=docker  --container-runtime=containerd: (30.41255587s)
--- PASS: TestErrorSpam/setup (30.41s)

TestErrorSpam/start (0.82s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-805660 --log_dir /tmp/nospam-805660 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-805660 --log_dir /tmp/nospam-805660 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-805660 --log_dir /tmp/nospam-805660 start --dry-run
--- PASS: TestErrorSpam/start (0.82s)

TestErrorSpam/status (1.09s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-805660 --log_dir /tmp/nospam-805660 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-805660 --log_dir /tmp/nospam-805660 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-805660 --log_dir /tmp/nospam-805660 status
--- PASS: TestErrorSpam/status (1.09s)

TestErrorSpam/pause (1.86s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-805660 --log_dir /tmp/nospam-805660 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-805660 --log_dir /tmp/nospam-805660 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-805660 --log_dir /tmp/nospam-805660 pause
--- PASS: TestErrorSpam/pause (1.86s)

TestErrorSpam/unpause (1.88s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-805660 --log_dir /tmp/nospam-805660 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-805660 --log_dir /tmp/nospam-805660 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-805660 --log_dir /tmp/nospam-805660 unpause
--- PASS: TestErrorSpam/unpause (1.88s)

TestErrorSpam/stop (1.52s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-805660 --log_dir /tmp/nospam-805660 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-805660 --log_dir /tmp/nospam-805660 stop: (1.319766872s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-805660 --log_dir /tmp/nospam-805660 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-805660 --log_dir /tmp/nospam-805660 stop
--- PASS: TestErrorSpam/stop (1.52s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19678-734403/.minikube/files/etc/test/nested/copy/739787/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (49.24s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-353629 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-353629 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (49.240972018s)
--- PASS: TestFunctional/serial/StartWithProxy (49.24s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.78s)

=== RUN   TestFunctional/serial/SoftStart
I0920 19:34:28.241811  739787 config.go:182] Loaded profile config "functional-353629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-353629 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-353629 --alsologtostderr -v=8: (6.783090823s)
functional_test.go:663: soft start took 6.783648179s for "functional-353629" cluster.
I0920 19:34:35.025211  739787 config.go:182] Loaded profile config "functional-353629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (6.78s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-353629 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-353629 cache add registry.k8s.io/pause:3.1: (1.506709094s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-353629 cache add registry.k8s.io/pause:3.3: (1.477289791s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-353629 cache add registry.k8s.io/pause:latest: (1.111376522s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.10s)

TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-353629 /tmp/TestFunctionalserialCacheCmdcacheadd_local258247516/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 cache add minikube-local-cache-test:functional-353629
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 cache delete minikube-local-cache-test:functional-353629
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-353629
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-353629 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (297.132899ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-353629 cache reload: (1.065095773s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 kubectl -- --context functional-353629 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-353629 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (40.52s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-353629 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-353629 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.518013176s)
functional_test.go:761: restart took 40.518106682s for "functional-353629" cluster.
I0920 19:35:23.898734  739787 config.go:182] Loaded profile config "functional-353629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (40.52s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-353629 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)
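A rough sketch of how the phase/Ready check above can be reproduced by decoding the pod list JSON; it assumes kubectl is on PATH with the same context, and the struct is trimmed to just the fields the check needs:

// component_health_sketch.go - illustrative reproduction of the control-plane health check.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Minimal slice of the Kubernetes PodList schema needed for this check.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-353629",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status
			}
		}
		fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}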

                                                
                                    
TestFunctional/serial/LogsCmd (1.76s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-353629 logs: (1.760915663s)
--- PASS: TestFunctional/serial/LogsCmd (1.76s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.76s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 logs --file /tmp/TestFunctionalserialLogsFileCmd2677454521/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-353629 logs --file /tmp/TestFunctionalserialLogsFileCmd2677454521/001/logs.txt: (1.759723774s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.76s)

                                                
                                    
TestFunctional/serial/InvalidService (4.74s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-353629 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-353629
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-353629: exit status 115 (436.985928ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31481 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-353629 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-353629 delete -f testdata/invalidsvc.yaml: (1.033669189s)
--- PASS: TestFunctional/serial/InvalidService (4.74s)
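A small Go sketch, under the assumptions of this run, of the failure-path check above: the service command for a backend with no running pods should exit with status 115 and mention SVC_UNREACHABLE.

// invalid_service_sketch.go - checks that `minikube service` fails cleanly for a
// service with no running pods, as in TestFunctional/serial/InvalidService.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "service", "invalid-svc", "-p", "functional-353629")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("unexpected success for invalid-svc")
	case errors.As(err, &exitErr):
		// The run above exited with status 115 and an SVC_UNREACHABLE message.
		fmt.Printf("exit code %d, SVC_UNREACHABLE mentioned: %v\n",
			exitErr.ExitCode(), strings.Contains(string(out), "SVC_UNREACHABLE"))
	default:
		fmt.Println("could not start command:", err)
	}
}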

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-353629 config get cpus: exit status 14 (83.854815ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-353629 config get cpus: exit status 14 (75.294188ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
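The same set/get/unset round trip can be scripted as below; an illustrative sketch only, using the exit status 14 seen above as the signal that the key is unset:

// config_cmd_sketch.go - set/get/unset round trip for `minikube config`, mirroring
// the sequence in TestFunctional/parallel/ConfigCmd.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

const mk = "out/minikube-linux-arm64"
const profile = "functional-353629"

// getCpus returns the configured value, or the command's exit code when the key
// is missing (the run above shows exit status 14 in that case).
func getCpus() (string, int) {
	out, err := exec.Command(mk, "-p", profile, "config", "get", "cpus").CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return "", exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), 0
}

func main() {
	exec.Command(mk, "-p", profile, "config", "unset", "cpus").Run()
	if _, code := getCpus(); code != 0 {
		fmt.Println("cpus unset, exit code:", code) // expect 14
	}

	exec.Command(mk, "-p", profile, "config", "set", "cpus", "2").Run()
	if v, _ := getCpus(); v != "" {
		fmt.Println("cpus set to:", v)
	}

	exec.Command(mk, "-p", profile, "config", "unset", "cpus").Run()
}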

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-353629 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-353629 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 774535: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.86s)

                                                
                                    
TestFunctional/parallel/DryRun (0.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-353629 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-353629 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (284.841129ms)

                                                
                                                
-- stdout --
	* [functional-353629] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-734403/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-734403/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 19:36:09.866509  773475 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:36:09.866667  773475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:36:09.866678  773475 out.go:358] Setting ErrFile to fd 2...
	I0920 19:36:09.866683  773475 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:36:09.866912  773475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-734403/.minikube/bin
	I0920 19:36:09.867291  773475 out.go:352] Setting JSON to false
	I0920 19:36:09.868353  773475 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11921,"bootTime":1726849049,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 19:36:09.868481  773475 start.go:139] virtualization:  
	I0920 19:36:09.871512  773475 out.go:177] * [functional-353629] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 19:36:09.873435  773475 notify.go:220] Checking for updates...
	I0920 19:36:09.873405  773475 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 19:36:09.875747  773475 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:36:09.877621  773475 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-734403/kubeconfig
	I0920 19:36:09.879912  773475 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-734403/.minikube
	I0920 19:36:09.885417  773475 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 19:36:09.887547  773475 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:36:09.891216  773475 config.go:182] Loaded profile config "functional-353629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 19:36:09.891753  773475 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:36:09.931761  773475 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:36:09.931884  773475 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:36:10.032849  773475 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 19:36:09.989505918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:36:10.032966  773475 docker.go:318] overlay module found
	I0920 19:36:10.036731  773475 out.go:177] * Using the docker driver based on existing profile
	I0920 19:36:10.038757  773475 start.go:297] selected driver: docker
	I0920 19:36:10.038800  773475 start.go:901] validating driver "docker" against &{Name:functional-353629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-353629 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:36:10.038931  773475 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:36:10.041605  773475 out.go:201] 
	W0920 19:36:10.043747  773475 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0920 19:36:10.045487  773475 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-353629 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.61s)
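A hedged sketch of the same validation path: a dry run with a 250MB memory request is rejected with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY, as in the stderr above), while a dry run without the override validates cleanly.

// dry_run_sketch.go - exercises the memory validation seen in TestFunctional/parallel/DryRun.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func exitCode(err error) int {
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return exitErr.ExitCode()
	}
	return 0
}

func main() {
	base := []string{"start", "-p", "functional-353629", "--dry-run", "--alsologtostderr",
		"--driver=docker", "--container-runtime=containerd"}

	// Too little memory: the run above exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY.
	tooSmall := append([]string{}, base...)
	tooSmall = append(tooSmall, "--memory", "250MB")
	err := exec.Command("out/minikube-linux-arm64", tooSmall...).Run()
	fmt.Println("250MB dry run exit code:", exitCode(err))

	// No memory override: the dry run should validate successfully.
	err = exec.Command("out/minikube-linux-arm64", base...).Run()
	fmt.Println("plain dry run exit code:", exitCode(err))
}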

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-353629 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-353629 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (247.713348ms)

                                                
                                                
-- stdout --
	* [functional-353629] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-734403/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-734403/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 19:36:12.539715  774245 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:36:12.539927  774245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:36:12.539959  774245 out.go:358] Setting ErrFile to fd 2...
	I0920 19:36:12.539984  774245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:36:12.543318  774245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-734403/.minikube/bin
	I0920 19:36:12.543788  774245 out.go:352] Setting JSON to false
	I0920 19:36:12.544724  774245 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":11924,"bootTime":1726849049,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 19:36:12.544800  774245 start.go:139] virtualization:  
	I0920 19:36:12.547357  774245 out.go:177] * [functional-353629] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0920 19:36:12.550854  774245 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 19:36:12.551080  774245 notify.go:220] Checking for updates...
	I0920 19:36:12.555480  774245 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 19:36:12.558452  774245 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-734403/kubeconfig
	I0920 19:36:12.561221  774245 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-734403/.minikube
	I0920 19:36:12.563121  774245 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 19:36:12.564960  774245 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 19:36:12.567596  774245 config.go:182] Loaded profile config "functional-353629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 19:36:12.568116  774245 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 19:36:12.616959  774245 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 19:36:12.617088  774245 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:36:12.688012  774245 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-20 19:36:12.676194029 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:36:12.688152  774245 docker.go:318] overlay module found
	I0920 19:36:12.690360  774245 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0920 19:36:12.692243  774245 start.go:297] selected driver: docker
	I0920 19:36:12.692256  774245 start.go:901] validating driver "docker" against &{Name:functional-353629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-353629 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0920 19:36:12.692376  774245 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 19:36:12.694729  774245 out.go:201] 
	W0920 19:36:12.696494  774245 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0920 19:36:12.698212  774245 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.18s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-353629 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-353629 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-zbnnh" [113b772e-e065-49de-a941-08ca7277589b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-zbnnh" [113b772e-e065-49de-a941-08ca7277589b] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004596901s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32688
functional_test.go:1675: http://192.168.49.2:32688: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-65d86f57f4-zbnnh

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32688
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.76s)
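An illustrative Go sketch of the last two steps above: resolve the NodePort URL with `service ... --url` and fetch it over HTTP (the deployment and expose steps are assumed to have been done already, as in the test):

// service_connect_sketch.go - resolves the NodePort URL for hello-node-connect and fetches it.
package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// `service ... --url` prints the reachable endpoint, e.g. http://192.168.49.2:32688.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-353629",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		fmt.Println("could not resolve service URL:", err)
		return
	}
	url := strings.TrimSpace(string(out))

	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s\n", url, resp.StatusCode, body)
}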

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (28.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3de6afb9-1dca-4bf2-94f2-c4843128548c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004363905s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-353629 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-353629 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-353629 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-353629 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8a293024-2450-4ffa-8f8e-ef91de924ed2] Pending
helpers_test.go:344: "sp-pod" [8a293024-2450-4ffa-8f8e-ef91de924ed2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8a293024-2450-4ffa-8f8e-ef91de924ed2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.013301846s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-353629 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-353629 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-353629 delete -f testdata/storage-provisioner/pod.yaml: (1.727655963s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-353629 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e666a20a-9796-4931-9296-ced9fbdf19c1] Pending
helpers_test.go:344: "sp-pod" [e666a20a-9796-4931-9296-ced9fbdf19c1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003128735s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-353629 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.02s)
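The core of the check above is that data written to the PVC-backed mount survives the pod being deleted and recreated. A rough sketch follows; it assumes the PVC and pod manifests from the repository's testdata are already applied, that the pod mounts the claim at /tmp/mount, and it uses `kubectl wait` for readiness, which is not part of the original test flow:

// pvc_persistence_sketch.go - verify data on the claim persists across pod recreation.
package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) error {
	full := append([]string{"--context", "functional-353629"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl %v failed: %v\n%s\n", args, err, out)
	}
	return err
}

func main() {
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

	// Recreate the pod from the same manifest; the claim itself is untouched.
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=3m")

	// If the file is still there, the claim's data persisted across pods.
	if err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount/foo"); err == nil {
		fmt.Println("/tmp/mount/foo survived pod recreation")
	}
}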

                                                
                                    
TestFunctional/parallel/SSHCmd (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.65s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh -n functional-353629 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 cp functional-353629:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1743242634/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh -n functional-353629 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh -n functional-353629 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.10s)
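A small sketch of the `minikube cp` round trip shown above; it compares the file pulled back from the node with the original directly in Go instead of via `ssh cat`, and the destination path in the temp directory is illustrative:

// cp_cmd_sketch.go - push a file into the node with `minikube cp`, pull it back, compare.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	const mk = "out/minikube-linux-arm64"
	const profile = "functional-353629"
	src := "testdata/cp-test.txt"

	// Push the file into the node...
	if err := exec.Command(mk, "-p", profile, "cp", src, "/home/docker/cp-test.txt").Run(); err != nil {
		fmt.Println("cp into node failed:", err)
		return
	}

	// ...and pull it back out into a temporary location.
	dst := filepath.Join(os.TempDir(), "cp-test-roundtrip.txt")
	if err := exec.Command(mk, "-p", profile, "cp", profile+":/home/docker/cp-test.txt", dst).Run(); err != nil {
		fmt.Println("cp out of node failed:", err)
		return
	}

	want, _ := os.ReadFile(src)
	got, _ := os.ReadFile(dst)
	fmt.Println("round-trip matches original:", bytes.Equal(want, got))
}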

                                                
                                    
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/739787/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "sudo cat /etc/test/nested/copy/739787/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

                                                
                                    
TestFunctional/parallel/CertSync (2.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/739787.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "sudo cat /etc/ssl/certs/739787.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/739787.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "sudo cat /usr/share/ca-certificates/739787.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/7397872.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "sudo cat /etc/ssl/certs/7397872.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/7397872.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "sudo cat /usr/share/ca-certificates/7397872.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.16s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-353629 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-353629 ssh "sudo systemctl is-active docker": exit status 1 (371.822626ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-353629 ssh "sudo systemctl is-active crio": exit status 1 (366.447556ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
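An illustrative sketch of the same check: on a containerd cluster, `systemctl is-active` inside the node should report the other runtimes as inactive (and exit non-zero, which is why the non-zero exits above are expected).

// runtime_disabled_sketch.go - confirm the non-selected runtimes are inactive in the node.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, svc := range []string{"docker", "crio"} {
		// is-active prints the state and exits non-zero for anything other than
		// "active", so the error from Run/Output is expected here and ignored.
		out, _ := exec.Command("out/minikube-linux-arm64", "-p", "functional-353629",
			"ssh", "sudo systemctl is-active "+svc).Output()
		state := strings.TrimSpace(string(out))
		fmt.Printf("%s: %s\n", svc, state) // containerd clusters should report "inactive"
	}
}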

                                                
                                    
TestFunctional/parallel/License (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.29s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-353629 version -o=json --components: (1.2876672s)
--- PASS: TestFunctional/parallel/Version/components (1.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-353629 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-353629
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-353629
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-353629 image ls --format short --alsologtostderr:
I0920 19:36:23.376130  776020 out.go:345] Setting OutFile to fd 1 ...
I0920 19:36:23.376373  776020 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:36:23.376404  776020 out.go:358] Setting ErrFile to fd 2...
I0920 19:36:23.376426  776020 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:36:23.376672  776020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-734403/.minikube/bin
I0920 19:36:23.377377  776020 config.go:182] Loaded profile config "functional-353629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 19:36:23.377537  776020 config.go:182] Loaded profile config "functional-353629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 19:36:23.378052  776020 cli_runner.go:164] Run: docker container inspect functional-353629 --format={{.State.Status}}
I0920 19:36:23.397009  776020 ssh_runner.go:195] Run: systemctl --version
I0920 19:36:23.397061  776020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-353629
I0920 19:36:23.417997  776020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/functional-353629/id_rsa Username:docker}
I0920 19:36:23.518852  776020 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-353629 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-353629  | sha256:ce2d2c | 2.17MB |
| docker.io/library/nginx                     | latest             | sha256:195245 | 67.7MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| docker.io/library/minikube-local-cache-test | functional-353629  | sha256:76f2bc | 992B   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-353629 image ls --format table --alsologtostderr:
I0920 19:36:24.306267  776220 out.go:345] Setting OutFile to fd 1 ...
I0920 19:36:24.306410  776220 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:36:24.306419  776220 out.go:358] Setting ErrFile to fd 2...
I0920 19:36:24.306425  776220 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:36:24.306697  776220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-734403/.minikube/bin
I0920 19:36:24.307352  776220 config.go:182] Loaded profile config "functional-353629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 19:36:24.307470  776220 config.go:182] Loaded profile config "functional-353629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 19:36:24.307966  776220 cli_runner.go:164] Run: docker container inspect functional-353629 --format={{.State.Status}}
I0920 19:36:24.326841  776220 ssh_runner.go:195] Run: systemctl --version
I0920 19:36:24.326908  776220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-353629
I0920 19:36:24.347504  776220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/functional-353629/id_rsa Username:docker}
I0920 19:36:24.465181  776220 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)
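The same listing is also emitted as JSON (see the ImageListJson output that follows). A sketch of decoding it into a small struct; the field names follow the JSON printed in this report, and the struct is illustrative rather than minikube's own type:

// image_list_sketch.go - decode `image ls --format json` output for programmatic use.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Field names follow the JSON shown in the ImageListJson test output below.
type imageInfo struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a decimal string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-353629",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []imageInfo
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range images {
		fmt.Printf("%-60v %s bytes\n", img.RepoTags, img.Size)
	}
}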

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-353629 image ls --format json --alsologtostderr:
[{"id":"sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"67695038"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef6
1fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc5
50d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:76f2bce14a310963b7811680d0dd208933307105
fbc90cc38b23f9fd1e58fd10","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-353629"],"size":"992"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-353629"],"size":"2173567"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"rep
oTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-353629 image ls --format json --alsologtostderr:
I0920 19:36:24.019664  776135 out.go:345] Setting OutFile to fd 1 ...
I0920 19:36:24.019831  776135 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:36:24.019841  776135 out.go:358] Setting ErrFile to fd 2...
I0920 19:36:24.019847  776135 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:36:24.020135  776135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-734403/.minikube/bin
I0920 19:36:24.020900  776135 config.go:182] Loaded profile config "functional-353629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 19:36:24.021035  776135 config.go:182] Loaded profile config "functional-353629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 19:36:24.021541  776135 cli_runner.go:164] Run: docker container inspect functional-353629 --format={{.State.Status}}
I0920 19:36:24.052263  776135 ssh_runner.go:195] Run: systemctl --version
I0920 19:36:24.052319  776135 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-353629
I0920 19:36:24.073112  776135 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/functional-353629/id_rsa Username:docker}
I0920 19:36:24.179459  776135 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.33s)
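Editor's note: the `image ls --format json` stdout above is a flat JSON array of image records with id, repoDigests, repoTags and size keys. A minimal Go sketch for consuming that output outside the test harness is shown below; the ImageRecord struct name and the use of exec.Command to shell out to a minikube binary on PATH are illustrative assumptions, while the field names and the profile name mirror what appears in the log.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// ImageRecord mirrors the keys seen in the `image ls --format json` stdout
// above (id, repoDigests, repoTags, size); the struct name is illustrative.
type ImageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // reported as a string, e.g. "67695038"
}

func main() {
	// Assumes a minikube binary on PATH and the profile from the log.
	out, err := exec.Command("minikube", "-p", "functional-353629",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	var images []ImageRecord
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("decoding image list: %v", err)
	}
	for _, img := range images {
		if len(img.RepoTags) > 0 {
			fmt.Printf("%s  %s bytes\n", img.RepoTags[0], img.Size)
		}
	}
}

The same records back the table and YAML listings exercised by the neighbouring ImageCommands subtests.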

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-353629 image ls --format yaml --alsologtostderr:
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:76f2bce14a310963b7811680d0dd208933307105fbc90cc38b23f9fd1e58fd10
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-353629
size: "992"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "67695038"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-353629
size: "2173567"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-353629 image ls --format yaml --alsologtostderr:
I0920 19:36:23.618068  776064 out.go:345] Setting OutFile to fd 1 ...
I0920 19:36:23.618282  776064 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:36:23.618296  776064 out.go:358] Setting ErrFile to fd 2...
I0920 19:36:23.618352  776064 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:36:23.618660  776064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-734403/.minikube/bin
I0920 19:36:23.619586  776064 config.go:182] Loaded profile config "functional-353629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 19:36:23.620614  776064 config.go:182] Loaded profile config "functional-353629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 19:36:23.621343  776064 cli_runner.go:164] Run: docker container inspect functional-353629 --format={{.State.Status}}
I0920 19:36:23.639226  776064 ssh_runner.go:195] Run: systemctl --version
I0920 19:36:23.639287  776064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-353629
I0920 19:36:23.657549  776064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/functional-353629/id_rsa Username:docker}
I0920 19:36:23.755074  776064 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
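Editor's note: the YAML listing above carries the same per-image fields as the JSON variant, just rendered as a YAML sequence. A companion sketch, assuming the gopkg.in/yaml.v3 module for decoding (the test itself only inspects the raw output); struct and variable names are illustrative.

package main

import (
	"fmt"
	"log"
	"os/exec"

	"gopkg.in/yaml.v3"
)

// imageEntry mirrors the per-image fields shown in the YAML stdout above.
type imageEntry struct {
	ID          string   `yaml:"id"`
	RepoDigests []string `yaml:"repoDigests"`
	RepoTags    []string `yaml:"repoTags"`
	Size        string   `yaml:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-353629",
		"image", "ls", "--format", "yaml").Output()
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	var images []imageEntry
	if err := yaml.Unmarshal(out, &images); err != nil {
		log.Fatalf("decoding yaml listing: %v", err)
	}
	fmt.Printf("%d images in the containerd store\n", len(images))
}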

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-353629 ssh pgrep buildkitd: exit status 1 (333.869793ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 image build -t localhost/my-image:functional-353629 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-353629 image build -t localhost/my-image:functional-353629 testdata/build --alsologtostderr: (3.145908956s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-353629 image build -t localhost/my-image:functional-353629 testdata/build --alsologtostderr:
I0920 19:36:24.194977  776192 out.go:345] Setting OutFile to fd 1 ...
I0920 19:36:24.195810  776192 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:36:24.195860  776192 out.go:358] Setting ErrFile to fd 2...
I0920 19:36:24.195883  776192 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0920 19:36:24.196192  776192 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-734403/.minikube/bin
I0920 19:36:24.196985  776192 config.go:182] Loaded profile config "functional-353629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 19:36:24.197745  776192 config.go:182] Loaded profile config "functional-353629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0920 19:36:24.198451  776192 cli_runner.go:164] Run: docker container inspect functional-353629 --format={{.State.Status}}
I0920 19:36:24.217535  776192 ssh_runner.go:195] Run: systemctl --version
I0920 19:36:24.217596  776192 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-353629
I0920 19:36:24.249219  776192 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/functional-353629/id_rsa Username:docker}
I0920 19:36:24.348631  776192 build_images.go:161] Building image from path: /tmp/build.349477391.tar
I0920 19:36:24.348694  776192 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0920 19:36:24.365229  776192 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.349477391.tar
I0920 19:36:24.369901  776192 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.349477391.tar: stat -c "%s %y" /var/lib/minikube/build/build.349477391.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.349477391.tar': No such file or directory
I0920 19:36:24.369933  776192 ssh_runner.go:362] scp /tmp/build.349477391.tar --> /var/lib/minikube/build/build.349477391.tar (3072 bytes)
I0920 19:36:24.407067  776192 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.349477391
I0920 19:36:24.417056  776192 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.349477391 -xf /var/lib/minikube/build/build.349477391.tar
I0920 19:36:24.427490  776192 containerd.go:394] Building image: /var/lib/minikube/build/build.349477391
I0920 19:36:24.427646  776192 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.349477391 --local dockerfile=/var/lib/minikube/build/build.349477391 --output type=image,name=localhost/my-image:functional-353629
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.6s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:a27d1e8237eb9e4cb62ddbaeace159cd7e32617c963905d27a55533cc08ac866
#8 exporting manifest sha256:a27d1e8237eb9e4cb62ddbaeace159cd7e32617c963905d27a55533cc08ac866 0.0s done
#8 exporting config sha256:4e73825f75cff1811c14dc1cc5f8312aaf898a9959bd8414e0b5b9af3b076b18 0.0s done
#8 naming to localhost/my-image:functional-353629 done
#8 DONE 0.1s
I0920 19:36:27.254463  776192 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.349477391 --local dockerfile=/var/lib/minikube/build/build.349477391 --output type=image,name=localhost/my-image:functional-353629: (2.826772667s)
I0920 19:36:27.254538  776192 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.349477391
I0920 19:36:27.264376  776192 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.349477391.tar
I0920 19:36:27.275582  776192 build_images.go:217] Built localhost/my-image:functional-353629 from /tmp/build.349477391.tar
I0920 19:36:27.275616  776192 build_images.go:133] succeeded building to: functional-353629
I0920 19:36:27.275622  776192 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.72s)
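Editor's note: the buildkit trace above implies a three-step Dockerfile (FROM the gcr.io/k8s-minikube/busybox test image, RUN true, ADD content.txt). The exact contents of testdata/build are not shown in the log, so the sketch below reconstructs an equivalent build directory by assumption and drives `image build` the same way the test does; the tag and profile name are copied from the invocation above.

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Reconstructed by assumption from the buildkit steps logged above;
	// the real testdata/build directory may differ in detail.
	dockerfile := `FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
`
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("test content\n"), 0o644); err != nil {
		log.Fatal(err)
	}
	// Same invocation shape as the test: build the directory into a locally tagged image.
	cmd := exec.Command("minikube", "-p", "functional-353629", "image", "build",
		"-t", "localhost/my-image:functional-353629", dir, "--alsologtostderr")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("image build failed: %v", err)
	}
}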

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-353629
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 image load --daemon kicbase/echo-server:functional-353629 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-353629 image load --daemon kicbase/echo-server:functional-353629 --alsologtostderr: (1.166589656s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 image load --daemon kicbase/echo-server:functional-353629 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-353629 image load --daemon kicbase/echo-server:functional-353629 --alsologtostderr: (1.18722741s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.50s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-353629
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 image load --daemon kicbase/echo-server:functional-353629 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-353629 image load --daemon kicbase/echo-server:functional-353629 --alsologtostderr: (1.07526624s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.65s)
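Editor's note: the three daemon-load subtests above tag an image in the host docker daemon, push it into the node's containerd store with `image load --daemon`, and confirm it with `image ls`. A condensed sketch of that flow; the image tag, profile name and commands are taken from the log, while the run helper is an assumption of this sketch.

package main

import (
	"log"
	"os/exec"
)

// run is a tiny helper (an assumption of this sketch) that fails fast on error.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	// Tag an image in the host docker daemon, push it into the minikube
	// node's containerd store, then list images to confirm it arrived.
	run("docker", "pull", "kicbase/echo-server:latest")
	run("docker", "tag", "kicbase/echo-server:latest", "kicbase/echo-server:functional-353629")
	run("minikube", "-p", "functional-353629", "image", "load", "--daemon",
		"kicbase/echo-server:functional-353629")
	run("minikube", "-p", "functional-353629", "image", "ls")
}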

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "412.181477ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "83.641203ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "430.81745ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "84.75571ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 image save kicbase/echo-server:functional-353629 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-353629 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-353629 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-353629 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-353629 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 771855: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 image rm kicbase/echo-server:functional-353629 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.87s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-353629 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-353629 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [4f0dea34-84ec-4175-bc3a-3c9bdb498536] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [4f0dea34-84ec-4175-bc3a-3c9bdb498536] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004569327s
I0920 19:35:50.265471  739787 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.43s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.76s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-353629
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 image save --daemon kicbase/echo-server:functional-353629 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-353629
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.59s)
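Editor's note: ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon together exercise a save/remove/reload cycle: save a tag to a tarball, delete it from the cluster, load it back from the file, and finally copy it back into the host docker daemon. A condensed sketch of that round trip, reusing the subcommands shown above; the tarball path and the run helper are illustrative assumptions.

package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	out, err := exec.Command("minikube", append([]string{"-p", "functional-353629"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
}

func main() {
	const tag = "kicbase/echo-server:functional-353629"
	const tar = "/tmp/echo-server-save.tar" // illustrative path; the test writes into its workspace

	run("image", "save", tag, tar)        // cluster image -> tarball
	run("image", "rm", tag)               // drop it from the containerd store
	run("image", "load", tar)             // restore it from the tarball
	run("image", "save", "--daemon", tag) // copy it back into the host docker daemon
	run("image", "ls")
}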

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-353629 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.37.15 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
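Editor's note: the tunnel subtests keep `minikube tunnel` running in the background, wait for the nginx-svc LoadBalancer service to receive an ingress IP, and then hit that IP directly (http://10.99.37.15 above). A sketch of the same flow outside the harness; the jsonpath query is the one the test runs, while the polling loop and timeout are assumptions of this sketch.

package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Keep a tunnel running in the background so LoadBalancer services get an IP.
	tunnel := exec.Command("minikube", "-p", "functional-353629", "tunnel")
	if err := tunnel.Start(); err != nil {
		log.Fatal(err)
	}
	defer tunnel.Process.Kill()

	// Poll the service status until the ingress IP appears (same jsonpath as the test).
	var ip string
	for i := 0; i < 60; i++ {
		out, err := exec.Command("kubectl", "--context", "functional-353629",
			"get", "svc", "nginx-svc",
			"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if err == nil && len(out) > 0 {
			ip = strings.TrimSpace(string(out))
			break
		}
		time.Sleep(2 * time.Second)
	}
	if ip == "" {
		log.Fatal("no ingress IP assigned")
	}

	resp, err := http.Get("http://" + ip)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("tunnel reachable, status:", resp.Status)
}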

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-353629 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-353629 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-353629 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-pjf8t" [aafb4f14-6fa1-4088-aa72-311d097ea9a8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-pjf8t" [aafb4f14-6fa1-4088-aa72-311d097ea9a8] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004027685s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.26s)
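Editor's note: the ServiceCmd subtests that follow (List, JSONOutput, HTTPS, Format, URL) all resolve the NodePort service created here. A sketch of the deploy-and-resolve flow, with the image, port and profile names copied from the log; the kubectl helper is an assumption of this sketch.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func kubectl(args ...string) {
	out, err := exec.Command("kubectl", append([]string{"--context", "functional-353629"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	// Same shape as the test: create a deployment and expose it on a NodePort.
	kubectl("create", "deployment", "hello-node", "--image=registry.k8s.io/echoserver-arm:1.8")
	kubectl("expose", "deployment", "hello-node", "--type=NodePort", "--port=8080")

	// Resolve the node URL for the service, as the later ServiceCmd subtests do.
	url, err := exec.Command("minikube", "-p", "functional-353629",
		"service", "hello-node", "--url").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("hello-node reachable at %s", url)
}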

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 service list -o json
functional_test.go:1494: Took "506.583195ms" to run "out/minikube-linux-arm64 -p functional-353629 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30652
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-353629 /tmp/TestFunctionalparallelMountCmdany-port3774044241/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726860970394107518" to /tmp/TestFunctionalparallelMountCmdany-port3774044241/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726860970394107518" to /tmp/TestFunctionalparallelMountCmdany-port3774044241/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726860970394107518" to /tmp/TestFunctionalparallelMountCmdany-port3774044241/001/test-1726860970394107518
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-353629 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (472.813786ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 19:36:10.867482  739787 retry.go:31] will retry after 743.173812ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 20 19:36 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 20 19:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 20 19:36 test-1726860970394107518
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh cat /mount-9p/test-1726860970394107518
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-353629 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [44fe3c0c-061c-40ca-854d-6ec54bb9b3b3] Pending
helpers_test.go:344: "busybox-mount" [44fe3c0c-061c-40ca-854d-6ec54bb9b3b3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [44fe3c0c-061c-40ca-854d-6ec54bb9b3b3] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [44fe3c0c-061c-40ca-854d-6ec54bb9b3b3] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003864337s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-353629 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-353629 /tmp/TestFunctionalparallelMountCmdany-port3774044241/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.88s)
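Editor's note: MountCmd/any-port exposes a host temp directory inside the node over 9p at /mount-9p, verifies it with findmnt (retrying until the mount appears, as the retry above shows), and then reads and writes files through it. A minimal host-side sketch; the directory path, file contents and retry count are illustrative assumptions, while the mount and ssh invocations mirror the commands in the log.

package main

import (
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	dir, err := os.MkdirTemp("", "mount-demo") // illustrative host directory
	if err != nil {
		log.Fatal(err)
	}
	defer os.RemoveAll(dir)
	if err := os.WriteFile(dir+"/created-by-test", []byte("hello\n"), 0o644); err != nil {
		log.Fatal(err)
	}

	// Keep the 9p mount process running in the background, as the test daemon does.
	mount := exec.Command("minikube", "mount", "-p", "functional-353629", dir+":/mount-9p")
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	defer mount.Process.Kill()

	// The mount takes a moment to appear inside the node, so retry findmnt a few times.
	for i := 0; i < 10; i++ {
		out, err := exec.Command("minikube", "-p", "functional-353629",
			"ssh", "findmnt -T /mount-9p").CombinedOutput()
		if err == nil {
			log.Printf("mounted:\n%s", out)
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("mount never became visible inside the node")
}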

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30652
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-353629 /tmp/TestFunctionalparallelMountCmdspecific-port2831241367/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-353629 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (551.303631ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 19:36:18.817397  739787 retry.go:31] will retry after 288.738388ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-353629 /tmp/TestFunctionalparallelMountCmdspecific-port2831241367/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-353629 ssh "sudo umount -f /mount-9p": exit status 1 (412.752629ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-353629 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-353629 /tmp/TestFunctionalparallelMountCmdspecific-port2831241367/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.20s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-353629 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2027638040/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-353629 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2027638040/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-353629 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2027638040/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-353629 ssh "findmnt -T" /mount1: exit status 1 (800.711595ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0920 19:36:21.270205  739787 retry.go:31] will retry after 595.575572ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "findmnt -T" /mount2
2024/09/20 19:36:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-353629 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-353629 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-353629 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2027638040/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-353629 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2027638040/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-353629 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2027638040/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.31s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-353629
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-353629
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-353629
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (121.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-423136 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0920 19:36:44.178532  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:36:44.185026  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:36:44.196407  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:36:44.217800  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:36:44.259165  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:36:44.340581  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:36:44.502498  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:36:44.824162  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:36:45.465992  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:36:46.747349  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:36:49.308836  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:36:54.430169  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:37:04.672120  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:37:25.153577  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:38:06.116381  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-423136 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m0.409971976s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (121.24s)
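Editor's note: StartCluster brings up a multi-control-plane cluster with the --ha flag and then checks it with `status`. A minimal sketch of those two commands, with the profile name and flags copied from the invocation above (the test's extra -v=7 --alsologtostderr verbosity flags are omitted here).

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Same flags as the test run above: an HA (multi control-plane) cluster on the
	// docker driver with the containerd runtime.
	start := exec.Command("minikube", "start", "-p", "ha-423136",
		"--wait=true", "--memory=2200", "--ha",
		"--driver=docker", "--container-runtime=containerd")
	start.Stdout, start.Stderr = os.Stdout, os.Stderr
	if err := start.Run(); err != nil {
		log.Fatalf("start failed: %v", err)
	}

	status := exec.Command("minikube", "-p", "ha-423136", "status")
	status.Stdout, status.Stderr = os.Stdout, os.Stderr
	_ = status.Run() // status exits non-zero if any node is not Running
}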

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (33.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-423136 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-423136 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-423136 -- rollout status deployment/busybox: (30.286299256s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-423136 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-423136 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-423136 -- exec busybox-7dff88458-b6gl9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-423136 -- exec busybox-7dff88458-vhtxv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-423136 -- exec busybox-7dff88458-zh6j9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-423136 -- exec busybox-7dff88458-b6gl9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-423136 -- exec busybox-7dff88458-vhtxv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-423136 -- exec busybox-7dff88458-zh6j9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-423136 -- exec busybox-7dff88458-b6gl9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-423136 -- exec busybox-7dff88458-vhtxv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-423136 -- exec busybox-7dff88458-zh6j9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (33.19s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-423136 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-423136 -- exec busybox-7dff88458-b6gl9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-423136 -- exec busybox-7dff88458-b6gl9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-423136 -- exec busybox-7dff88458-vhtxv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-423136 -- exec busybox-7dff88458-vhtxv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-423136 -- exec busybox-7dff88458-zh6j9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-423136 -- exec busybox-7dff88458-zh6j9 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.75s)
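Note: the nslookup/awk/cut pipeline above is how the test extracts the host gateway IP before pinging it. A minimal sketch of the same steps, assuming the BusyBox nslookup output layout used in these images (line 5 is the answer record "Address 1: <ip> host.minikube.internal"); the address shown is the value observed in this run:
	nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3    # prints the resolved IP, e.g. 192.168.49.1
	ping -c 1 192.168.49.1                                           # then reach the host from inside the pod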

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (24.32s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-423136 -v=7 --alsologtostderr
E0920 19:39:28.038107  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-423136 -v=7 --alsologtostderr: (23.162370177s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-423136 status -v=7 --alsologtostderr: (1.160793044s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.32s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.1s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-423136 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.013563423s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.01s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.66s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-423136 status --output json -v=7 --alsologtostderr: (1.020731923s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 cp testdata/cp-test.txt ha-423136:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 cp ha-423136:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1754898348/001/cp-test_ha-423136.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 cp ha-423136:/home/docker/cp-test.txt ha-423136-m02:/home/docker/cp-test_ha-423136_ha-423136-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m02 "sudo cat /home/docker/cp-test_ha-423136_ha-423136-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 cp ha-423136:/home/docker/cp-test.txt ha-423136-m03:/home/docker/cp-test_ha-423136_ha-423136-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m03 "sudo cat /home/docker/cp-test_ha-423136_ha-423136-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 cp ha-423136:/home/docker/cp-test.txt ha-423136-m04:/home/docker/cp-test_ha-423136_ha-423136-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m04 "sudo cat /home/docker/cp-test_ha-423136_ha-423136-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 cp testdata/cp-test.txt ha-423136-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 cp ha-423136-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1754898348/001/cp-test_ha-423136-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 cp ha-423136-m02:/home/docker/cp-test.txt ha-423136:/home/docker/cp-test_ha-423136-m02_ha-423136.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136 "sudo cat /home/docker/cp-test_ha-423136-m02_ha-423136.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 cp ha-423136-m02:/home/docker/cp-test.txt ha-423136-m03:/home/docker/cp-test_ha-423136-m02_ha-423136-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m03 "sudo cat /home/docker/cp-test_ha-423136-m02_ha-423136-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 cp ha-423136-m02:/home/docker/cp-test.txt ha-423136-m04:/home/docker/cp-test_ha-423136-m02_ha-423136-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m04 "sudo cat /home/docker/cp-test_ha-423136-m02_ha-423136-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 cp testdata/cp-test.txt ha-423136-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 cp ha-423136-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1754898348/001/cp-test_ha-423136-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 cp ha-423136-m03:/home/docker/cp-test.txt ha-423136:/home/docker/cp-test_ha-423136-m03_ha-423136.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136 "sudo cat /home/docker/cp-test_ha-423136-m03_ha-423136.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 cp ha-423136-m03:/home/docker/cp-test.txt ha-423136-m02:/home/docker/cp-test_ha-423136-m03_ha-423136-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m02 "sudo cat /home/docker/cp-test_ha-423136-m03_ha-423136-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 cp ha-423136-m03:/home/docker/cp-test.txt ha-423136-m04:/home/docker/cp-test_ha-423136-m03_ha-423136-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m04 "sudo cat /home/docker/cp-test_ha-423136-m03_ha-423136-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 cp testdata/cp-test.txt ha-423136-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 cp ha-423136-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1754898348/001/cp-test_ha-423136-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 cp ha-423136-m04:/home/docker/cp-test.txt ha-423136:/home/docker/cp-test_ha-423136-m04_ha-423136.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136 "sudo cat /home/docker/cp-test_ha-423136-m04_ha-423136.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 cp ha-423136-m04:/home/docker/cp-test.txt ha-423136-m02:/home/docker/cp-test_ha-423136-m04_ha-423136-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m02 "sudo cat /home/docker/cp-test_ha-423136-m04_ha-423136-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 cp ha-423136-m04:/home/docker/cp-test.txt ha-423136-m03:/home/docker/cp-test_ha-423136-m04_ha-423136-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 ssh -n ha-423136-m03 "sudo cat /home/docker/cp-test_ha-423136-m04_ha-423136-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.66s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.87s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-423136 node stop m02 -v=7 --alsologtostderr: (12.08954465s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-423136 status -v=7 --alsologtostderr: exit status 7 (783.002168ms)

                                                
                                                
-- stdout --
	ha-423136
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423136-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-423136-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-423136-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 19:40:03.829736  792470 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:40:03.829963  792470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:40:03.829976  792470 out.go:358] Setting ErrFile to fd 2...
	I0920 19:40:03.829981  792470 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:40:03.830241  792470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-734403/.minikube/bin
	I0920 19:40:03.832800  792470 out.go:352] Setting JSON to false
	I0920 19:40:03.832866  792470 mustload.go:65] Loading cluster: ha-423136
	I0920 19:40:03.832938  792470 notify.go:220] Checking for updates...
	I0920 19:40:03.833979  792470 config.go:182] Loaded profile config "ha-423136": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 19:40:03.834010  792470 status.go:174] checking status of ha-423136 ...
	I0920 19:40:03.834690  792470 cli_runner.go:164] Run: docker container inspect ha-423136 --format={{.State.Status}}
	I0920 19:40:03.857876  792470 status.go:364] ha-423136 host status = "Running" (err=<nil>)
	I0920 19:40:03.857904  792470 host.go:66] Checking if "ha-423136" exists ...
	I0920 19:40:03.858222  792470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423136
	I0920 19:40:03.879283  792470 host.go:66] Checking if "ha-423136" exists ...
	I0920 19:40:03.879618  792470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:40:03.879685  792470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423136
	I0920 19:40:03.905017  792470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/ha-423136/id_rsa Username:docker}
	I0920 19:40:04.008417  792470 ssh_runner.go:195] Run: systemctl --version
	I0920 19:40:04.013611  792470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:40:04.027250  792470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:40:04.087076  792470 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-20 19:40:04.07601413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:40:04.087841  792470 kubeconfig.go:125] found "ha-423136" server: "https://192.168.49.254:8443"
	I0920 19:40:04.087895  792470 api_server.go:166] Checking apiserver status ...
	I0920 19:40:04.087944  792470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:40:04.101437  792470 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup
	I0920 19:40:04.111617  792470 api_server.go:182] apiserver freezer: "5:freezer:/docker/1c7c9540d1a94636e18749aa8e8e0c1642c9c1dddae45431bae796cfa1355e5e/kubepods/burstable/podab340a5362887e0a998d37aa9d89d6b1/cf7e0cf9cb4c530ec8ca22f91a2a3a47237676a34c481c19766551424cba6143"
	I0920 19:40:04.111711  792470 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1c7c9540d1a94636e18749aa8e8e0c1642c9c1dddae45431bae796cfa1355e5e/kubepods/burstable/podab340a5362887e0a998d37aa9d89d6b1/cf7e0cf9cb4c530ec8ca22f91a2a3a47237676a34c481c19766551424cba6143/freezer.state
	I0920 19:40:04.122242  792470 api_server.go:204] freezer state: "THAWED"
	I0920 19:40:04.122276  792470 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 19:40:04.130603  792470 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 19:40:04.130638  792470 status.go:456] ha-423136 apiserver status = Running (err=<nil>)
	I0920 19:40:04.130653  792470 status.go:176] ha-423136 status: &{Name:ha-423136 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:40:04.130672  792470 status.go:174] checking status of ha-423136-m02 ...
	I0920 19:40:04.130998  792470 cli_runner.go:164] Run: docker container inspect ha-423136-m02 --format={{.State.Status}}
	I0920 19:40:04.150630  792470 status.go:364] ha-423136-m02 host status = "Stopped" (err=<nil>)
	I0920 19:40:04.150657  792470 status.go:377] host is not running, skipping remaining checks
	I0920 19:40:04.150665  792470 status.go:176] ha-423136-m02 status: &{Name:ha-423136-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:40:04.150687  792470 status.go:174] checking status of ha-423136-m03 ...
	I0920 19:40:04.151006  792470 cli_runner.go:164] Run: docker container inspect ha-423136-m03 --format={{.State.Status}}
	I0920 19:40:04.173747  792470 status.go:364] ha-423136-m03 host status = "Running" (err=<nil>)
	I0920 19:40:04.173779  792470 host.go:66] Checking if "ha-423136-m03" exists ...
	I0920 19:40:04.174132  792470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423136-m03
	I0920 19:40:04.197380  792470 host.go:66] Checking if "ha-423136-m03" exists ...
	I0920 19:40:04.197703  792470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:40:04.197743  792470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423136-m03
	I0920 19:40:04.220157  792470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/ha-423136-m03/id_rsa Username:docker}
	I0920 19:40:04.320752  792470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:40:04.334456  792470 kubeconfig.go:125] found "ha-423136" server: "https://192.168.49.254:8443"
	I0920 19:40:04.334489  792470 api_server.go:166] Checking apiserver status ...
	I0920 19:40:04.334534  792470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:40:04.345981  792470 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1340/cgroup
	I0920 19:40:04.358620  792470 api_server.go:182] apiserver freezer: "5:freezer:/docker/e83e197a980711773f9976850bfc14b09b66358f91ac7d9889d560b0ee027d30/kubepods/burstable/pod30e8bf7537e4532f1e2b1d7dd7136436/e92d66100efdcce8760ded0839c7758d9270226841ae4a99c453893f1aa52fb8"
	I0920 19:40:04.358742  792470 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e83e197a980711773f9976850bfc14b09b66358f91ac7d9889d560b0ee027d30/kubepods/burstable/pod30e8bf7537e4532f1e2b1d7dd7136436/e92d66100efdcce8760ded0839c7758d9270226841ae4a99c453893f1aa52fb8/freezer.state
	I0920 19:40:04.368027  792470 api_server.go:204] freezer state: "THAWED"
	I0920 19:40:04.368078  792470 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0920 19:40:04.377212  792470 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0920 19:40:04.377244  792470 status.go:456] ha-423136-m03 apiserver status = Running (err=<nil>)
	I0920 19:40:04.377254  792470 status.go:176] ha-423136-m03 status: &{Name:ha-423136-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:40:04.377296  792470 status.go:174] checking status of ha-423136-m04 ...
	I0920 19:40:04.377634  792470 cli_runner.go:164] Run: docker container inspect ha-423136-m04 --format={{.State.Status}}
	I0920 19:40:04.396354  792470 status.go:364] ha-423136-m04 host status = "Running" (err=<nil>)
	I0920 19:40:04.396382  792470 host.go:66] Checking if "ha-423136-m04" exists ...
	I0920 19:40:04.396688  792470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-423136-m04
	I0920 19:40:04.418447  792470 host.go:66] Checking if "ha-423136-m04" exists ...
	I0920 19:40:04.418772  792470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:40:04.418824  792470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-423136-m04
	I0920 19:40:04.438224  792470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/ha-423136-m04/id_rsa Username:docker}
	I0920 19:40:04.536250  792470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:40:04.552820  792470 status.go:176] ha-423136-m04 status: &{Name:ha-423136-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.87s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (18.08s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-423136 node start m02 -v=7 --alsologtostderr: (16.958544124s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-423136 status -v=7 --alsologtostderr: (1.00870997s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.029114848s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (153.35s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-423136 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-423136 -v=7 --alsologtostderr
E0920 19:40:39.839758  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:40:39.846258  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:40:39.857729  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:40:39.879190  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:40:39.920566  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:40:40.003038  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:40:40.164537  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:40:40.486475  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:40:41.128027  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:40:42.409629  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:40:44.971030  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:40:50.093030  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:41:00.334553  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-423136 -v=7 --alsologtostderr: (37.183720308s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-423136 --wait=true -v=7 --alsologtostderr
E0920 19:41:20.815937  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:41:44.178472  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:42:01.777677  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:42:11.880276  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-423136 --wait=true -v=7 --alsologtostderr: (1m55.980430862s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-423136
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (153.35s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.86s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-423136 node delete m03 -v=7 --alsologtostderr: (9.823903958s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.86s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.77s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.23s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 stop -v=7 --alsologtostderr
E0920 19:43:23.699188  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-423136 stop -v=7 --alsologtostderr: (36.104801302s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-423136 status -v=7 --alsologtostderr: exit status 7 (120.207847ms)

                                                
                                                
-- stdout --
	ha-423136
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-423136-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-423136-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 19:43:45.627792  806943 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:43:45.627954  806943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:43:45.627966  806943 out.go:358] Setting ErrFile to fd 2...
	I0920 19:43:45.627971  806943 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:43:45.628223  806943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-734403/.minikube/bin
	I0920 19:43:45.628444  806943 out.go:352] Setting JSON to false
	I0920 19:43:45.628499  806943 mustload.go:65] Loading cluster: ha-423136
	I0920 19:43:45.628609  806943 notify.go:220] Checking for updates...
	I0920 19:43:45.628956  806943 config.go:182] Loaded profile config "ha-423136": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 19:43:45.628971  806943 status.go:174] checking status of ha-423136 ...
	I0920 19:43:45.629842  806943 cli_runner.go:164] Run: docker container inspect ha-423136 --format={{.State.Status}}
	I0920 19:43:45.648537  806943 status.go:364] ha-423136 host status = "Stopped" (err=<nil>)
	I0920 19:43:45.648559  806943 status.go:377] host is not running, skipping remaining checks
	I0920 19:43:45.648567  806943 status.go:176] ha-423136 status: &{Name:ha-423136 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:43:45.648615  806943 status.go:174] checking status of ha-423136-m02 ...
	I0920 19:43:45.648937  806943 cli_runner.go:164] Run: docker container inspect ha-423136-m02 --format={{.State.Status}}
	I0920 19:43:45.674605  806943 status.go:364] ha-423136-m02 host status = "Stopped" (err=<nil>)
	I0920 19:43:45.674630  806943 status.go:377] host is not running, skipping remaining checks
	I0920 19:43:45.674639  806943 status.go:176] ha-423136-m02 status: &{Name:ha-423136-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:43:45.674672  806943 status.go:174] checking status of ha-423136-m04 ...
	I0920 19:43:45.674992  806943 cli_runner.go:164] Run: docker container inspect ha-423136-m04 --format={{.State.Status}}
	I0920 19:43:45.691230  806943 status.go:364] ha-423136-m04 host status = "Stopped" (err=<nil>)
	I0920 19:43:45.691256  806943 status.go:377] host is not running, skipping remaining checks
	I0920 19:43:45.691264  806943 status.go:176] ha-423136-m04 status: &{Name:ha-423136-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.23s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (43.21s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-423136 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-423136 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (42.131627896s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (43.21s)
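Note: the go-template in the last kubectl command walks each node's status.conditions and prints the status of its Ready condition, so a healthy cluster prints one "True" per node. A rough, hypothetical rendering for the three nodes left at this point (the stray quotes come from the literal ' characters inside the template argument):
	' True
	 True
	 True
	'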

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.04s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.036038752s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.04s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (43.14s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-423136 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-423136 --control-plane -v=7 --alsologtostderr: (42.066678887s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-423136 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-423136 status -v=7 --alsologtostderr: (1.073926236s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.14s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.028289324s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

                                                
                                    
TestJSONOutput/start/Command (90.24s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-187165 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0920 19:45:39.839212  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:46:07.541951  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:46:44.178749  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-187165 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m30.239223752s)
--- PASS: TestJSONOutput/start/Command (90.24s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.77s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-187165 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-187165 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.8s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-187165 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-187165 --output=json --user=testUser: (5.802619455s)
--- PASS: TestJSONOutput/stop/Command (5.80s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-947255 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-947255 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.939073ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8a1805a3-88b7-42fe-87c5-cec4ef634862","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-947255] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"343c790b-6c11-4e98-8040-3c8bef3d7d27","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19678"}}
	{"specversion":"1.0","id":"8f779167-c68e-4228-943d-cf37da705006","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bff32253-de9d-428d-a54c-a5dc5ff63665","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19678-734403/kubeconfig"}}
	{"specversion":"1.0","id":"42f28362-8074-457d-8a3f-e064d079c46d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-734403/.minikube"}}
	{"specversion":"1.0","id":"4b62a444-9dd5-43c5-b898-f39475acba1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"de96c8c6-9730-4b45-861f-9feec4c18eba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"664cbba3-e7a5-4f25-a658-0996250fa6cb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-947255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-947255
--- PASS: TestErrorJSONOutput (0.22s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (41.22s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-709051 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-709051 --network=: (39.051488122s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-709051" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-709051
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-709051: (2.144104399s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.22s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (31.59s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-803557 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-803557 --network=bridge: (29.635380157s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-803557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-803557
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-803557: (1.930674723s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (31.59s)

                                                
                                    
TestKicExistingNetwork (33.61s)
=== RUN   TestKicExistingNetwork
I0920 19:48:17.813155  739787 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0920 19:48:17.828502  739787 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0920 19:48:17.828584  739787 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0920 19:48:17.828602  739787 cli_runner.go:164] Run: docker network inspect existing-network
W0920 19:48:17.844203  739787 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0920 19:48:17.844236  739787 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0920 19:48:17.844254  739787 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0920 19:48:17.844357  739787 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0920 19:48:17.860835  739787 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ed49c39a7360 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:9e:dc:a0:2e} reservation:<nil>}
I0920 19:48:17.861261  739787 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017853c0}
I0920 19:48:17.861284  739787 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0920 19:48:17.861351  739787 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0920 19:48:17.932495  739787 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-518769 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-518769 --network=existing-network: (31.451397941s)
helpers_test.go:175: Cleaning up "existing-network-518769" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-518769
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-518769: (2.006718004s)
I0920 19:48:51.406985  739787 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (33.61s)
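For reference, the pre-create-then-reuse flow exercised above can be reproduced outside the test harness. A minimal Go sketch, assuming docker and minikube are on PATH; the profile name below is illustrative, while the subnet mirrors the one the log picked:

package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts on failure, echoing its combined output.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
}

func main() {
	// Pre-create the bridge network, then ask minikube to join it via --network.
	run("docker", "network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "existing-network")
	run("minikube", "start", "-p", "existing-network-demo",
		"--network=existing-network")
	// Tear down the profile when done; the pre-created network can be removed
	// separately with "docker network rm" since minikube did not create it.
	run("minikube", "delete", "-p", "existing-network-demo")
}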

                                                
                                    
TestKicCustomSubnet (34.06s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-772731 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-772731 --subnet=192.168.60.0/24: (31.930902468s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-772731 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-772731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-772731
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-772731: (2.099751831s)
--- PASS: TestKicCustomSubnet (34.06s)
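The subnet assertion at kic_custom_network_test.go:161 reduces to reading the IPAM config of the docker network that minikube creates for the profile. A minimal sketch of the same check, with an illustrative profile name and the subnet from the run above:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// minikube names the docker network after the profile, so inspecting it
	// returns the subnet that --subnet requested at start time.
	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-demo",
		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	got := strings.TrimSpace(string(out))
	if got != "192.168.60.0/24" {
		log.Fatalf("unexpected subnet: %s", got)
	}
	fmt.Println("subnet matches:", got)
}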

                                                
                                    
TestKicStaticIP (34.88s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-562636 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-562636 --static-ip=192.168.200.200: (32.421101715s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-562636 ip
helpers_test.go:175: Cleaning up "static-ip-562636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-562636
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-562636: (2.300027696s)
--- PASS: TestKicStaticIP (34.88s)
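The static-IP check above boils down to comparing the requested address with what "minikube ip" reports. A hedged sketch, with the profile name assumed and the address taken from the run:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	const want = "192.168.200.200"
	// Start a cluster pinned to a fixed address, then read the address back.
	if out, err := exec.Command("minikube", "start", "-p", "static-ip-demo",
		"--static-ip="+want).CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}
	out, err := exec.Command("minikube", "-p", "static-ip-demo", "ip").Output()
	if err != nil {
		log.Fatal(err)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		log.Fatalf("got IP %s, want %s", got, want)
	}
}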

                                                
                                    
TestMainNoArgs (0.13s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.13s)

                                                
                                    
TestMinikubeProfile (65.74s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-863621 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-863621 --driver=docker  --container-runtime=containerd: (29.800178176s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-866911 --driver=docker  --container-runtime=containerd
E0920 19:50:39.842761  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-866911 --driver=docker  --container-runtime=containerd: (30.454688998s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-863621
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-866911
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-866911" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-866911
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-866911: (2.106446937s)
helpers_test.go:175: Cleaning up "first-863621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-863621
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-863621: (1.995864859s)
--- PASS: TestMinikubeProfile (65.74s)
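The ProfileList steps above rely on "minikube profile list -ojson". A minimal sketch of consuming that output; the top-level "valid" key and the "Name" field are assumptions about the JSON layout rather than something shown in this log:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profiles mirrors only the fields this sketch needs; the key and field names
// are assumed, not verified against the actual -ojson schema.
type profiles struct {
	Valid []struct {
		Name string
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
	if err != nil {
		log.Fatal(err)
	}
	var p profiles
	if err := json.Unmarshal(out, &p); err != nil {
		log.Fatal(err)
	}
	for _, v := range p.Valid {
		fmt.Println("profile:", v.Name)
	}
}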

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.75s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-572479 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-572479 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.751976351s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.75s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-572479 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
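The VerifyMount* steps are a single ls over ssh inside the node. A minimal sketch, assuming a profile started with the --mount flags shown above; the profile name is illustrative:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// List the host mount point inside the node; a zero exit status indicates
	// the mount requested at start time is reachable from the guest.
	out, err := exec.Command("minikube", "-p", "mount-start-demo",
		"ssh", "--", "ls", "/minikube-host").CombinedOutput()
	if err != nil {
		log.Fatalf("mount check failed: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}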

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.07s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-574555 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-574555 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.070673321s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.07s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-574555 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-572479 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-572479 --alsologtostderr -v=5: (1.654013282s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-574555 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.19s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-574555
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-574555: (1.193907131s)
--- PASS: TestMountStart/serial/Stop (1.19s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.73s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-574555
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-574555: (6.732409457s)
--- PASS: TestMountStart/serial/RestartStopped (7.73s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-574555 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (69.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-828222 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0920 19:51:44.178040  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-828222 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m8.646749711s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (69.19s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (20.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-828222 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-828222 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-828222 -- rollout status deployment/busybox: (18.919492632s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-828222 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-828222 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-828222 -- exec busybox-7dff88458-7fbss -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-828222 -- exec busybox-7dff88458-w57lv -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-828222 -- exec busybox-7dff88458-7fbss -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-828222 -- exec busybox-7dff88458-w57lv -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-828222 -- exec busybox-7dff88458-7fbss -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-828222 -- exec busybox-7dff88458-w57lv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (20.89s)
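The DNS checks above are plain nslookup calls executed inside the busybox pods. A minimal sketch using kubectl directly; the context and pod names are illustrative:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Resolve an in-cluster service name from one replica, mirroring the
	// nslookup checks in the test.
	args := []string{"--context", "multinode-demo", "exec", "busybox-pod", "--",
		"nslookup", "kubernetes.default.svc.cluster.local"}
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("DNS lookup failed: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}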

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-828222 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-828222 -- exec busybox-7dff88458-7fbss -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-828222 -- exec busybox-7dff88458-7fbss -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-828222 -- exec busybox-7dff88458-w57lv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-828222 -- exec busybox-7dff88458-w57lv -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)

                                                
                                    
TestMultiNode/serial/AddNode (16.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-828222 -v 3 --alsologtostderr
E0920 19:53:07.242492  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-828222 -v 3 --alsologtostderr: (16.018830198s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.70s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-828222 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 cp testdata/cp-test.txt multinode-828222:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 ssh -n multinode-828222 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 cp multinode-828222:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1690841620/001/cp-test_multinode-828222.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 ssh -n multinode-828222 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 cp multinode-828222:/home/docker/cp-test.txt multinode-828222-m02:/home/docker/cp-test_multinode-828222_multinode-828222-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 ssh -n multinode-828222 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 ssh -n multinode-828222-m02 "sudo cat /home/docker/cp-test_multinode-828222_multinode-828222-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 cp multinode-828222:/home/docker/cp-test.txt multinode-828222-m03:/home/docker/cp-test_multinode-828222_multinode-828222-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 ssh -n multinode-828222 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 ssh -n multinode-828222-m03 "sudo cat /home/docker/cp-test_multinode-828222_multinode-828222-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 cp testdata/cp-test.txt multinode-828222-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 ssh -n multinode-828222-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 cp multinode-828222-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1690841620/001/cp-test_multinode-828222-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 ssh -n multinode-828222-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 cp multinode-828222-m02:/home/docker/cp-test.txt multinode-828222:/home/docker/cp-test_multinode-828222-m02_multinode-828222.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 ssh -n multinode-828222-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 ssh -n multinode-828222 "sudo cat /home/docker/cp-test_multinode-828222-m02_multinode-828222.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 cp multinode-828222-m02:/home/docker/cp-test.txt multinode-828222-m03:/home/docker/cp-test_multinode-828222-m02_multinode-828222-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 ssh -n multinode-828222-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 ssh -n multinode-828222-m03 "sudo cat /home/docker/cp-test_multinode-828222-m02_multinode-828222-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 cp testdata/cp-test.txt multinode-828222-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 ssh -n multinode-828222-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 cp multinode-828222-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1690841620/001/cp-test_multinode-828222-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 ssh -n multinode-828222-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 cp multinode-828222-m03:/home/docker/cp-test.txt multinode-828222:/home/docker/cp-test_multinode-828222-m03_multinode-828222.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 ssh -n multinode-828222-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 ssh -n multinode-828222 "sudo cat /home/docker/cp-test_multinode-828222-m03_multinode-828222.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 cp multinode-828222-m03:/home/docker/cp-test.txt multinode-828222-m02:/home/docker/cp-test_multinode-828222-m03_multinode-828222-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 ssh -n multinode-828222-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 ssh -n multinode-828222-m02 "sudo cat /home/docker/cp-test_multinode-828222-m03_multinode-828222-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.29s)
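CopyFile drives "minikube cp" for host-to-node and node-to-node transfers, then reads each file back over ssh. A condensed sketch of the same sequence with illustrative profile and node names:

package main

import (
	"log"
	"os/exec"
)

// run invokes the minikube binary and aborts on any failure.
func run(args ...string) {
	if out, err := exec.Command("minikube", args...).CombinedOutput(); err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
}

func main() {
	// Host -> control-plane node, then node -> node, then verify over ssh.
	run("-p", "multinode-demo", "cp", "testdata/cp-test.txt",
		"multinode-demo:/home/docker/cp-test.txt")
	run("-p", "multinode-demo", "cp",
		"multinode-demo:/home/docker/cp-test.txt",
		"multinode-demo-m02:/home/docker/cp-test.txt")
	run("-p", "multinode-demo", "ssh", "-n", "multinode-demo-m02",
		"sudo cat /home/docker/cp-test.txt")
}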

                                                
                                    
TestMultiNode/serial/StopNode (2.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-828222 node stop m03: (1.254727715s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-828222 status: exit status 7 (563.03703ms)

                                                
                                                
-- stdout --
	multinode-828222
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-828222-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-828222-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-828222 status --alsologtostderr: exit status 7 (520.557528ms)

                                                
                                                
-- stdout --
	multinode-828222
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-828222-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-828222-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 19:53:36.243835  860474 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:53:36.244113  860474 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:53:36.244129  860474 out.go:358] Setting ErrFile to fd 2...
	I0920 19:53:36.244136  860474 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:53:36.244406  860474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-734403/.minikube/bin
	I0920 19:53:36.244608  860474 out.go:352] Setting JSON to false
	I0920 19:53:36.244643  860474 mustload.go:65] Loading cluster: multinode-828222
	I0920 19:53:36.245105  860474 config.go:182] Loaded profile config "multinode-828222": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 19:53:36.245130  860474 status.go:174] checking status of multinode-828222 ...
	I0920 19:53:36.245704  860474 cli_runner.go:164] Run: docker container inspect multinode-828222 --format={{.State.Status}}
	I0920 19:53:36.246368  860474 notify.go:220] Checking for updates...
	I0920 19:53:36.263938  860474 status.go:364] multinode-828222 host status = "Running" (err=<nil>)
	I0920 19:53:36.263963  860474 host.go:66] Checking if "multinode-828222" exists ...
	I0920 19:53:36.264306  860474 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-828222
	I0920 19:53:36.294288  860474 host.go:66] Checking if "multinode-828222" exists ...
	I0920 19:53:36.294662  860474 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:53:36.294717  860474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-828222
	I0920 19:53:36.312476  860474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/multinode-828222/id_rsa Username:docker}
	I0920 19:53:36.411530  860474 ssh_runner.go:195] Run: systemctl --version
	I0920 19:53:36.415883  860474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:53:36.427593  860474 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 19:53:36.477507  860474 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-20 19:53:36.467365999 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 19:53:36.478119  860474 kubeconfig.go:125] found "multinode-828222" server: "https://192.168.67.2:8443"
	I0920 19:53:36.478154  860474 api_server.go:166] Checking apiserver status ...
	I0920 19:53:36.478198  860474 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0920 19:53:36.489606  860474 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1450/cgroup
	I0920 19:53:36.499929  860474 api_server.go:182] apiserver freezer: "5:freezer:/docker/f24ad9e4a04269e2f1e55896fdec4994b32cf325f138fcea355892f4743c99b9/kubepods/burstable/podc7ae83390434e4047862dc0c6bc981c1/64b1a0308eea0af1c100895b3483170824cc540c405f2f16a74b673e7acc26b4"
	I0920 19:53:36.500005  860474 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f24ad9e4a04269e2f1e55896fdec4994b32cf325f138fcea355892f4743c99b9/kubepods/burstable/podc7ae83390434e4047862dc0c6bc981c1/64b1a0308eea0af1c100895b3483170824cc540c405f2f16a74b673e7acc26b4/freezer.state
	I0920 19:53:36.509021  860474 api_server.go:204] freezer state: "THAWED"
	I0920 19:53:36.509063  860474 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0920 19:53:36.516677  860474 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0920 19:53:36.516710  860474 status.go:456] multinode-828222 apiserver status = Running (err=<nil>)
	I0920 19:53:36.516725  860474 status.go:176] multinode-828222 status: &{Name:multinode-828222 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:53:36.516742  860474 status.go:174] checking status of multinode-828222-m02 ...
	I0920 19:53:36.517066  860474 cli_runner.go:164] Run: docker container inspect multinode-828222-m02 --format={{.State.Status}}
	I0920 19:53:36.534046  860474 status.go:364] multinode-828222-m02 host status = "Running" (err=<nil>)
	I0920 19:53:36.534075  860474 host.go:66] Checking if "multinode-828222-m02" exists ...
	I0920 19:53:36.534451  860474 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-828222-m02
	I0920 19:53:36.554212  860474 host.go:66] Checking if "multinode-828222-m02" exists ...
	I0920 19:53:36.554677  860474 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0920 19:53:36.554745  860474 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-828222-m02
	I0920 19:53:36.573007  860474 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/19678-734403/.minikube/machines/multinode-828222-m02/id_rsa Username:docker}
	I0920 19:53:36.675478  860474 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0920 19:53:36.687535  860474 status.go:176] multinode-828222-m02 status: &{Name:multinode-828222-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:53:36.687582  860474 status.go:174] checking status of multinode-828222-m03 ...
	I0920 19:53:36.687903  860474 cli_runner.go:164] Run: docker container inspect multinode-828222-m03 --format={{.State.Status}}
	I0920 19:53:36.706062  860474 status.go:364] multinode-828222-m03 host status = "Stopped" (err=<nil>)
	I0920 19:53:36.706094  860474 status.go:377] host is not running, skipping remaining checks
	I0920 19:53:36.706102  860474 status.go:176] multinode-828222-m03 status: &{Name:multinode-828222-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)
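In the run above, stopping one worker makes "minikube status" exit with code 7 while still printing per-node state, so a caller has to treat that exit code as data rather than a hard failure. A minimal sketch of that handling; the profile name is illustrative and the meaning of exit code 7 is inferred from this log:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "multinode-demo", "status").CombinedOutput()
	fmt.Print(string(out))
	var exit *exec.ExitError
	if errors.As(err, &exit) && exit.ExitCode() == 7 {
		// In the runs above this exit code accompanied a stopped node, so it
		// is reported but not treated as an error here.
		fmt.Println("some nodes are stopped (exit 7); continuing")
	} else if err != nil {
		log.Fatal(err)
	}
}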

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-828222 node start m03 -v=7 --alsologtostderr: (8.96562274s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.76s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (81.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-828222
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-828222
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-828222: (25.079219948s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-828222 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-828222 --wait=true -v=8 --alsologtostderr: (55.89290868s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-828222
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.11s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-828222 node delete m03: (4.594295198s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.26s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-828222 stop: (23.91820826s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-828222 status: exit status 7 (101.041817ms)

                                                
                                                
-- stdout --
	multinode-828222
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-828222-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-828222 status --alsologtostderr: exit status 7 (97.637006ms)

                                                
                                                
-- stdout --
	multinode-828222
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-828222-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 19:55:36.912876  868521 out.go:345] Setting OutFile to fd 1 ...
	I0920 19:55:36.913044  868521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:55:36.913055  868521 out.go:358] Setting ErrFile to fd 2...
	I0920 19:55:36.913061  868521 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 19:55:36.913286  868521 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-734403/.minikube/bin
	I0920 19:55:36.913461  868521 out.go:352] Setting JSON to false
	I0920 19:55:36.913500  868521 mustload.go:65] Loading cluster: multinode-828222
	I0920 19:55:36.913539  868521 notify.go:220] Checking for updates...
	I0920 19:55:36.913919  868521 config.go:182] Loaded profile config "multinode-828222": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 19:55:36.913934  868521 status.go:174] checking status of multinode-828222 ...
	I0920 19:55:36.914569  868521 cli_runner.go:164] Run: docker container inspect multinode-828222 --format={{.State.Status}}
	I0920 19:55:36.932998  868521 status.go:364] multinode-828222 host status = "Stopped" (err=<nil>)
	I0920 19:55:36.933022  868521 status.go:377] host is not running, skipping remaining checks
	I0920 19:55:36.933030  868521 status.go:176] multinode-828222 status: &{Name:multinode-828222 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0920 19:55:36.933064  868521 status.go:174] checking status of multinode-828222-m02 ...
	I0920 19:55:36.933376  868521 cli_runner.go:164] Run: docker container inspect multinode-828222-m02 --format={{.State.Status}}
	I0920 19:55:36.957067  868521 status.go:364] multinode-828222-m02 host status = "Stopped" (err=<nil>)
	I0920 19:55:36.957095  868521 status.go:377] host is not running, skipping remaining checks
	I0920 19:55:36.957103  868521 status.go:176] multinode-828222-m02 status: &{Name:multinode-828222-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.12s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (54.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-828222 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0920 19:55:39.839183  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-828222 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (53.868910375s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-828222 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.59s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (33.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-828222
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-828222-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-828222-m02 --driver=docker  --container-runtime=containerd: exit status 14 (82.720676ms)

                                                
                                                
-- stdout --
	* [multinode-828222-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-734403/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-734403/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-828222-m02' is duplicated with machine name 'multinode-828222-m02' in profile 'multinode-828222'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-828222-m03 --driver=docker  --container-runtime=containerd
E0920 19:56:44.178597  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
E0920 19:57:02.903770  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-828222-m03 --driver=docker  --container-runtime=containerd: (31.401211236s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-828222
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-828222: exit status 80 (325.423345ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-828222 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-828222-m03 already exists in multinode-828222-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-828222-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-828222-m03: (1.942801676s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.81s)

                                                
                                    
TestPreload (113.73s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-512663 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-512663 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m14.203682175s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-512663 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-512663 image pull gcr.io/k8s-minikube/busybox: (2.0237691s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-512663
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-512663: (12.04625514s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-512663 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-512663 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (22.263521774s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-512663 image list
helpers_test.go:175: Cleaning up "test-preload-512663" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-512663
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-512663: (2.515344496s)
--- PASS: TestPreload (113.73s)

                                                
                                    
TestScheduledStopUnix (109.29s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-296573 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-296573 --memory=2048 --driver=docker  --container-runtime=containerd: (32.85832443s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-296573 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-296573 -n scheduled-stop-296573
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-296573 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0920 19:59:36.419719  739787 retry.go:31] will retry after 117.135µs: open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/scheduled-stop-296573/pid: no such file or directory
I0920 19:59:36.420876  739787 retry.go:31] will retry after 211.145µs: open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/scheduled-stop-296573/pid: no such file or directory
I0920 19:59:36.421256  739787 retry.go:31] will retry after 135.067µs: open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/scheduled-stop-296573/pid: no such file or directory
I0920 19:59:36.421964  739787 retry.go:31] will retry after 385.559µs: open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/scheduled-stop-296573/pid: no such file or directory
I0920 19:59:36.423084  739787 retry.go:31] will retry after 552.25µs: open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/scheduled-stop-296573/pid: no such file or directory
I0920 19:59:36.424173  739787 retry.go:31] will retry after 705.059µs: open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/scheduled-stop-296573/pid: no such file or directory
I0920 19:59:36.425294  739787 retry.go:31] will retry after 1.509351ms: open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/scheduled-stop-296573/pid: no such file or directory
I0920 19:59:36.427588  739787 retry.go:31] will retry after 1.94808ms: open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/scheduled-stop-296573/pid: no such file or directory
I0920 19:59:36.429827  739787 retry.go:31] will retry after 3.611445ms: open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/scheduled-stop-296573/pid: no such file or directory
I0920 19:59:36.434103  739787 retry.go:31] will retry after 3.216768ms: open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/scheduled-stop-296573/pid: no such file or directory
I0920 19:59:36.438414  739787 retry.go:31] will retry after 8.291385ms: open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/scheduled-stop-296573/pid: no such file or directory
I0920 19:59:36.447678  739787 retry.go:31] will retry after 8.260171ms: open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/scheduled-stop-296573/pid: no such file or directory
I0920 19:59:36.456980  739787 retry.go:31] will retry after 10.266297ms: open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/scheduled-stop-296573/pid: no such file or directory
I0920 19:59:36.469555  739787 retry.go:31] will retry after 26.171559ms: open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/scheduled-stop-296573/pid: no such file or directory
I0920 19:59:36.496673  739787 retry.go:31] will retry after 39.58629ms: open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/scheduled-stop-296573/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-296573 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-296573 -n scheduled-stop-296573
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-296573
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-296573 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0920 20:00:39.841508  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-296573
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-296573: exit status 7 (70.669001ms)

                                                
                                                
-- stdout --
	scheduled-stop-296573
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-296573 -n scheduled-stop-296573
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-296573 -n scheduled-stop-296573: exit status 7 (64.61959ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-296573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-296573
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-296573: (4.690484326s)
--- PASS: TestScheduledStopUnix (109.29s)
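The scheduled-stop flow above is driven entirely by "minikube stop --schedule" and "--cancel-scheduled". A minimal sketch of scheduling a stop and then cancelling it, with an illustrative profile name:

package main

import (
	"log"
	"os/exec"
)

// run invokes the minikube binary and aborts on any failure.
func run(args ...string) {
	if out, err := exec.Command("minikube", args...).CombinedOutput(); err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
}

func main() {
	// Schedule a stop five minutes out, as the test does before it later lets
	// a 15s schedule actually fire, then cancel it.
	run("stop", "-p", "scheduled-stop-demo", "--schedule", "5m")
	run("stop", "-p", "scheduled-stop-demo", "--cancel-scheduled")
}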

                                                
                                    
TestInsufficientStorage (11.51s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-413247 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-413247 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.959779644s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"41b9b578-6110-4341-a91d-f47c41fccda4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-413247] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d5f7dc56-e2cc-4bda-be58-389cf9829ce7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19678"}}
	{"specversion":"1.0","id":"9802a45d-2f56-492c-80d5-b7ca26567940","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8d3b11a7-b839-4064-aa8d-7be267edc5e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19678-734403/kubeconfig"}}
	{"specversion":"1.0","id":"7e022d8a-a5d0-40ed-b0e0-2c7e123b1f15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-734403/.minikube"}}
	{"specversion":"1.0","id":"976dc8c4-99c6-4d77-8395-01b04430bd18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"32689db8-0d98-47c9-aaeb-c4af94d52318","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6517a8a6-dfd8-4f4e-86c8-551bc3b2b2ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"05d9f3e3-685b-4ab7-95d7-b44b384fa7d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"cb1d7ccc-ec51-4b64-8350-3949513cb1ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8d10b62e-b3f3-4622-b9a3-a3fd2ab4fac9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"41b404cc-fe32-4a87-a171-faa108939dc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-413247\" primary control-plane node in \"insufficient-storage-413247\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"26a8fa65-705d-4951-b003-62bc1940225d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726589491-19662 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"fc8727d5-11be-44f1-86fd-f8cc74b4efd9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"702813f6-67ba-4cdd-a9b2-88486bdf6e2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
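Note: the storage failure above is simulated rather than real. The MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 events show the test capping the reported disk figures so the preflight check trips with exit code 26 (RSRC_DOCKER_STORAGE). A rough reproduction sketch, assuming those variables behave as the events suggest, followed by the cleanup advice quoted in the error event for a genuinely full /var:

    # hypothetical reproduction; profile name and variable values are only illustrative
    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p insufficient-storage-demo --memory=2048 --output=json --driver=docker --container-runtime=containerd
    echo $?                                # expected: 26 rather than 0

    # advice from the error event, for a /var that is actually full
    docker system prune -a
    minikube ssh -- docker system prune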
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-413247 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-413247 --output=json --layout=cluster: exit status 7 (322.750582ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-413247","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-413247","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 20:01:01.611323  886872 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-413247" does not appear in /home/jenkins/minikube-integration/19678-734403/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-413247 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-413247 --output=json --layout=cluster: exit status 7 (295.993285ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-413247","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-413247","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0920 20:01:01.908016  886933 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-413247" does not appear in /home/jenkins/minikube-integration/19678-734403/kubeconfig
	E0920 20:01:01.918464  886933 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/insufficient-storage-413247/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-413247" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-413247
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-413247: (1.934618998s)
--- PASS: TestInsufficientStorage (11.51s)

TestRunningBinaryUpgrade (85.52s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3964322242 start -p running-upgrade-581708 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3964322242 start -p running-upgrade-581708 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.27541394s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-581708 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0920 20:06:44.178759  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-581708 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.503859696s)
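The flow exercised here is a live binary upgrade: the cluster is created with a v1.26.0 binary and then restarted in place with the binary under test, with no intervening stop. Reduced to its two start invocations (the /tmp path is the old release the test downloaded, not a standard location):

    /tmp/minikube-v1.26.0.3964322242 start -p running-upgrade-581708 --memory=2200 --vm-driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 start -p running-upgrade-581708 --memory=2200 --driver=docker --container-runtime=containerd    # newer binary adopts the running cluster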
helpers_test.go:175: Cleaning up "running-upgrade-581708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-581708
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-581708: (2.824896297s)
--- PASS: TestRunningBinaryUpgrade (85.52s)

TestKubernetesUpgrade (354.16s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-642200 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-642200 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m0.703311844s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-642200
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-642200: (1.319456131s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-642200 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-642200 status --format={{.Host}}: exit status 7 (89.781335ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-642200 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-642200 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m42.864137706s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-642200 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-642200 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-642200 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (154.123399ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-642200] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-734403/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-734403/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-642200
	    minikube start -p kubernetes-upgrade-642200 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6422002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-642200 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
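The downgrade is rejected by design (exit status 106, K8S_DOWNGRADE_UNSUPPORTED): minikube will not downgrade an existing cluster in place. If a v1.20.0 cluster were actually wanted, the suggestion above amounts to recreating or duplicating the profile; a minimal sketch using the commands from the suggestion:

    # option 1: discard the profile and recreate it at the older version
    minikube delete -p kubernetes-upgrade-642200
    minikube start -p kubernetes-upgrade-642200 --kubernetes-version=v1.20.0

    # option 3: keep the already-upgraded cluster as-is
    minikube start -p kubernetes-upgrade-642200 --kubernetes-version=v1.31.1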
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-642200 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-642200 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.654835241s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-642200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-642200
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-642200: (2.213401049s)
--- PASS: TestKubernetesUpgrade (354.16s)

TestMissingContainerUpgrade (167.01s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3098565603 start -p missing-upgrade-458150 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3098565603 start -p missing-upgrade-458150 --memory=2200 --driver=docker  --container-runtime=containerd: (1m37.200288854s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-458150
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-458150
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-458150 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-458150 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m5.740594875s)
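In outline, this is the "missing container" scenario: an older binary creates the cluster, the backing container is removed behind minikube's back, and the current binary must recreate it from the stored profile. A sketch of the same steps (the /tmp/minikube-v1.26.0.* path is the pre-downloaded old release used by the test):

    /tmp/minikube-v1.26.0.3098565603 start -p missing-upgrade-458150 --memory=2200 --driver=docker --container-runtime=containerd
    docker stop missing-upgrade-458150 && docker rm missing-upgrade-458150    # simulate the container going missing
    out/minikube-linux-arm64 start -p missing-upgrade-458150 --memory=2200 --alsologtostderr -v=1 --driver=docker --container-runtime=containerd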
helpers_test.go:175: Cleaning up "missing-upgrade-458150" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-458150
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-458150: (2.281223718s)
--- PASS: TestMissingContainerUpgrade (167.01s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-881953 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-881953 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (88.665666ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-881953] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-734403/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-734403/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
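Exit status 14 (MK_USAGE) is the expected outcome here: --no-kubernetes and --kubernetes-version are mutually exclusive. A minimal sketch of the two fixes the error text points at:

    # clear any globally configured version, as the error message suggests
    minikube config unset kubernetes-version

    # then start without Kubernetes and without a version flag
    minikube start -p NoKubernetes-881953 --no-kubernetes --driver=docker --container-runtime=containerd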
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (40.52s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-881953 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-881953 --driver=docker  --container-runtime=containerd: (40.140726756s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-881953 status -o json
E0920 20:01:44.178661  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.52s)

TestNoKubernetes/serial/StartWithStopK8s (11.37s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-881953 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-881953 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.479398236s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-881953 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-881953 status -o json: exit status 2 (413.992142ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-881953","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-881953
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-881953: (4.476116238s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (11.37s)

TestNoKubernetes/serial/Start (6.63s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-881953 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-881953 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.630324861s)
--- PASS: TestNoKubernetes/serial/Start (6.63s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-881953 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-881953 "sudo systemctl is-active --quiet service kubelet": exit status 1 (267.359683ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
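The non-zero exit is the pass condition here: systemctl reports the kubelet unit as inactive (status 3), which surfaces as a failed ssh command. A rough manual check along the same lines:

    minikube ssh -p NoKubernetes-881953 "sudo systemctl is-active kubelet"    # prints "inactive" and exits non-zero when Kubernetes is not running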
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

TestNoKubernetes/serial/ProfileList (0.98s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.98s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-881953
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-881953: (1.209746375s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (6.39s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-881953 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-881953 --driver=docker  --container-runtime=containerd: (6.386283887s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.39s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-881953 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-881953 "sudo systemctl is-active --quiet service kubelet": exit status 1 (268.185694ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestStoppedBinaryUpgrade/Setup (0.82s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.82s)

TestStoppedBinaryUpgrade/Upgrade (116.02s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3583872320 start -p stopped-upgrade-085600 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3583872320 start -p stopped-upgrade-085600 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.271831027s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3583872320 -p stopped-upgrade-085600 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3583872320 -p stopped-upgrade-085600 stop: (20.098880399s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-085600 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0920 20:05:39.839121  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-085600 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (50.644381316s)
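This upgrade path differs from TestRunningBinaryUpgrade only in the explicit stop: the old binary creates and then stops the cluster, and the binary under test brings it back up. A sketch using the commands above (the /tmp path is the test's downloaded v1.26.0 binary):

    /tmp/minikube-v1.26.0.3583872320 start -p stopped-upgrade-085600 --memory=2200 --vm-driver=docker --container-runtime=containerd
    /tmp/minikube-v1.26.0.3583872320 -p stopped-upgrade-085600 stop
    out/minikube-linux-arm64 start -p stopped-upgrade-085600 --memory=2200 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 logs -p stopped-upgrade-085600    # inspected by the MinikubeLogs step that follows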
--- PASS: TestStoppedBinaryUpgrade/Upgrade (116.02s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-085600
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-085600: (1.251558364s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.25s)

TestPause/serial/Start (95.73s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-714935 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-714935 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m35.734299597s)
--- PASS: TestPause/serial/Start (95.73s)

TestNetworkPlugins/group/false (3.99s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-417426 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-417426 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (170.435996ms)

                                                
                                                
-- stdout --
	* [false-417426] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19678
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19678-734403/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-734403/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0920 20:08:48.818812  927143 out.go:345] Setting OutFile to fd 1 ...
	I0920 20:08:48.818996  927143 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:08:48.819026  927143 out.go:358] Setting ErrFile to fd 2...
	I0920 20:08:48.819048  927143 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0920 20:08:48.819330  927143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19678-734403/.minikube/bin
	I0920 20:08:48.819825  927143 out.go:352] Setting JSON to false
	I0920 20:08:48.820863  927143 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":13880,"bootTime":1726849049,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0920 20:08:48.820978  927143 start.go:139] virtualization:  
	I0920 20:08:48.823631  927143 out.go:177] * [false-417426] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0920 20:08:48.826117  927143 out.go:177]   - MINIKUBE_LOCATION=19678
	I0920 20:08:48.826183  927143 notify.go:220] Checking for updates...
	I0920 20:08:48.830465  927143 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0920 20:08:48.832317  927143 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19678-734403/kubeconfig
	I0920 20:08:48.834400  927143 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19678-734403/.minikube
	I0920 20:08:48.836640  927143 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0920 20:08:48.838662  927143 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0920 20:08:48.841164  927143 config.go:182] Loaded profile config "pause-714935": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0920 20:08:48.841335  927143 driver.go:394] Setting default libvirt URI to qemu:///system
	I0920 20:08:48.872037  927143 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0920 20:08:48.872163  927143 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0920 20:08:48.926871  927143 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-20 20:08:48.915900567 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0920 20:08:48.926990  927143 docker.go:318] overlay module found
	I0920 20:08:48.929101  927143 out.go:177] * Using the docker driver based on user configuration
	I0920 20:08:48.930924  927143 start.go:297] selected driver: docker
	I0920 20:08:48.930947  927143 start.go:901] validating driver "docker" against <nil>
	I0920 20:08:48.930963  927143 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0920 20:08:48.933554  927143 out.go:201] 
	W0920 20:08:48.935586  927143 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0920 20:08:48.937389  927143 out.go:201] 

                                                
                                                
** /stderr **
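The exit status 14 (MK_USAGE) above is the expected result: with the containerd runtime, minikube refuses --cni=false because containerd needs a CNI plugin for pod networking. A hedged example of a start line that would pass this validation (bridge is one of the documented --cni values; the default auto-selection would also work):

    minikube start -p false-417426 --memory=2048 --cni=bridge --driver=docker --container-runtime=containerd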
net_test.go:88: 
----------------------- debugLogs start: false-417426 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-417426

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-417426

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-417426

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-417426

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-417426

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-417426

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-417426

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-417426

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-417426

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-417426

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-417426

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-417426" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-417426" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19678-734403/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 20:07:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-714935
contexts:
- context:
    cluster: pause-714935
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 20:07:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-714935
  name: pause-714935
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-714935
  user:
    client-certificate: /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/pause-714935/client.crt
    client-key: /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/pause-714935/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-417426

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-417426"

                                                
                                                
----------------------- debugLogs end: false-417426 [took: 3.581766848s] --------------------------------
helpers_test.go:175: Cleaning up "false-417426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-417426
--- PASS: TestNetworkPlugins/group/false (3.99s)

TestPause/serial/SecondStartNoReconfiguration (7.35s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-714935 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-714935 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.33395975s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.35s)

TestPause/serial/Pause (1.31s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-714935 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-714935 --alsologtostderr -v=5: (1.310385746s)
--- PASS: TestPause/serial/Pause (1.31s)

TestPause/serial/VerifyStatus (0.43s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-714935 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-714935 --output=json --layout=cluster: exit status 2 (428.496394ms)

                                                
                                                
-- stdout --
	{"Name":"pause-714935","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-714935","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
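The status codes in this JSON follow the HTTP-like convention seen elsewhere in this report: 200 OK, 405 Stopped, 418 Paused, 500 Error, 507 InsufficientStorage. A small hedged check along the lines of what the test asserts (jq assumed to be available):

    minikube status -p pause-714935 --output=json --layout=cluster | jq -r '.StatusName'    # expected to print "Paused"; the status command itself exits 2 while paused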
--- PASS: TestPause/serial/VerifyStatus (0.43s)

TestPause/serial/Unpause (1s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-714935 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-714935 --alsologtostderr -v=5: (1.002477538s)
--- PASS: TestPause/serial/Unpause (1.00s)

TestPause/serial/PauseAgain (1.04s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-714935 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-714935 --alsologtostderr -v=5: (1.038851171s)
--- PASS: TestPause/serial/PauseAgain (1.04s)

TestPause/serial/DeletePaused (3.2s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-714935 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-714935 --alsologtostderr -v=5: (3.195094482s)
--- PASS: TestPause/serial/DeletePaused (3.20s)

TestPause/serial/VerifyDeletedResources (0.5s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-714935
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-714935: exit status 1 (16.411523ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-714935: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.50s)
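For context, the deletion check above leans on `docker volume inspect` exiting non-zero once the profile's volume is gone (the stderr captured above). A rough equivalent in Go, assuming only that the docker CLI is on PATH; volumeExists is a name made up for this sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// volumeExists reports whether `docker volume inspect <name>` succeeds;
	// docker exits non-zero when the volume is absent, as seen in the log above.
	func volumeExists(name string) bool {
		return exec.Command("docker", "volume", "inspect", name).Run() == nil
	}

	func main() {
		fmt.Println(volumeExists("pause-714935")) // false after `delete -p pause-714935`
	}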

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (167.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-060703 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0920 20:10:39.838828  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:11:44.178733  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-060703 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m47.122377986s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (167.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.67s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-268732 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-268732 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (55.669489635s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (55.67s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-060703 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [25101771-438f-4963-a971-cc942ca52d55] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [25101771-438f-4963-a971-cc942ca52d55] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004936135s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-060703 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.67s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-060703 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-060703 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.3916046s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-060703 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.56s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-060703 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-060703 --alsologtostderr -v=3: (12.251774878s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-060703 -n old-k8s-version-060703
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-060703 -n old-k8s-version-060703: exit status 7 (73.207177ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-060703 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
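As a side note, `--format={{.Host}}` is a Go text/template applied to the status result, which is why a stopped profile prints just "Stopped". A small standalone sketch under that assumption (the Status struct here is illustrative, not minikube's internal type):

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for the object the --format template is executed against.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		// A stopped profile renders as "Stopped", matching the stdout captured above.
		_ = tmpl.Execute(os.Stdout, Status{Host: "Stopped", Kubelet: "Stopped", APIServer: "Stopped"})
	}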

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-268732 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f09bb195-a73d-4052-8ed3-7c175c42e3bb] Pending
helpers_test.go:344: "busybox" [f09bb195-a73d-4052-8ed3-7c175c42e3bb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f09bb195-a73d-4052-8ed3-7c175c42e3bb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003409331s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-268732 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-268732 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-268732 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.071324477s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-268732 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-268732 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-268732 --alsologtostderr -v=3: (12.221137693s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-268732 -n default-k8s-diff-port-268732
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-268732 -n default-k8s-diff-port-268732: exit status 7 (112.518067ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-268732 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (290.91s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-268732 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0920 20:15:39.839599  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:16:44.178822  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-268732 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m50.520847551s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-268732 -n default-k8s-diff-port-268732
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (290.91s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-x869n" [a6855bef-efd5-47e0-8ecb-5bf0a902a5c3] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003820765s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-x869n" [a6855bef-efd5-47e0-8ecb-5bf0a902a5c3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005036336s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-268732 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-268732 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-268732 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-268732 --alsologtostderr -v=1: (1.073936411s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-268732 -n default-k8s-diff-port-268732
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-268732 -n default-k8s-diff-port-268732: exit status 2 (400.459582ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-268732 -n default-k8s-diff-port-268732
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-268732 -n default-k8s-diff-port-268732: exit status 2 (430.933595ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-268732 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-268732 -n default-k8s-diff-port-268732
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-268732 -n default-k8s-diff-port-268732
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.53s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (54.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-975064 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-975064 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (54.570437191s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (54.57s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-4ln9z" [4b501a55-8ad5-48b7-937f-832248b5536a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003622713s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-4ln9z" [4b501a55-8ad5-48b7-937f-832248b5536a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005982257s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-060703 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-060703 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-060703 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-060703 --alsologtostderr -v=1: (1.23779932s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-060703 -n old-k8s-version-060703
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-060703 -n old-k8s-version-060703: exit status 2 (596.614701ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-060703 -n old-k8s-version-060703
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-060703 -n old-k8s-version-060703: exit status 2 (372.390546ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-060703 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-060703 -n old-k8s-version-060703
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-060703 -n old-k8s-version-060703
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.94s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (62.88s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-043373 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-043373 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m2.88335771s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (62.88s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-975064 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [912fa471-c8f5-4a2c-b30a-62c71e989b6c] Pending
helpers_test.go:344: "busybox" [912fa471-c8f5-4a2c-b30a-62c71e989b6c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [912fa471-c8f5-4a2c-b30a-62c71e989b6c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004592071s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-975064 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-975064 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-975064 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.438243135s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-975064 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.57s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-975064 --alsologtostderr -v=3
E0920 20:20:39.839521  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-975064 --alsologtostderr -v=3: (12.28602587s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-975064 -n embed-certs-975064
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-975064 -n embed-certs-975064: exit status 7 (114.786148ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-975064 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (280.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-975064 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-975064 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m39.935850148s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-975064 -n embed-certs-975064
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (280.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-043373 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [849ef53f-a24e-40c6-981c-d39342b0a8c0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [849ef53f-a24e-40c6-981c-d39342b0a8c0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003064021s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-043373 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-043373 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-043373 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.095514295s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-043373 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-043373 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-043373 --alsologtostderr -v=3: (12.141177613s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-043373 -n no-preload-043373
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-043373 -n no-preload-043373: exit status 7 (93.10552ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-043373 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (271.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-043373 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0920 20:21:44.178198  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:06.505356  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:06.511920  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:06.523365  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:06.544933  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:06.586373  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:06.667827  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:06.829341  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:07.151036  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:07.792594  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:09.074384  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:11.635817  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:16.757842  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:26.999511  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:47.481018  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:52.960119  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/default-k8s-diff-port-268732/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:52.966548  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/default-k8s-diff-port-268732/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:52.978186  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/default-k8s-diff-port-268732/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:52.999795  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/default-k8s-diff-port-268732/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:53.041293  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/default-k8s-diff-port-268732/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:53.122801  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/default-k8s-diff-port-268732/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:53.284299  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/default-k8s-diff-port-268732/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:53.606063  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/default-k8s-diff-port-268732/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:54.248120  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/default-k8s-diff-port-268732/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:55.529830  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/default-k8s-diff-port-268732/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:23:58.091685  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/default-k8s-diff-port-268732/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:24:03.212973  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/default-k8s-diff-port-268732/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:24:13.455267  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/default-k8s-diff-port-268732/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:24:28.442502  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:24:33.936955  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/default-k8s-diff-port-268732/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:25:14.898574  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/default-k8s-diff-port-268732/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-043373 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m30.997511713s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-043373 -n no-preload-043373
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (271.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9sq7x" [5e9b682c-eff3-4a67-a9f2-5b166daec1f2] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.009544312s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-9sq7x" [5e9b682c-eff3-4a67-a9f2-5b166daec1f2] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006220553s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-975064 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-975064 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-975064 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-975064 -n embed-certs-975064
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-975064 -n embed-certs-975064: exit status 2 (323.220537ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-975064 -n embed-certs-975064
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-975064 -n embed-certs-975064: exit status 2 (334.73378ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-975064 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-975064 -n embed-certs-975064
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-975064 -n embed-certs-975064
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (36.87s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-640017 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0920 20:25:50.364444  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-640017 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (36.867031236s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.87s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4j8kp" [3aabd2d0-44b2-4d43-b5d9-044d289425ac] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003920966s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-4j8kp" [3aabd2d0-44b2-4d43-b5d9-044d289425ac] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004618188s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-043373 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-043373 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (4.64s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-043373 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-043373 --alsologtostderr -v=1: (1.043312167s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-043373 -n no-preload-043373
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-043373 -n no-preload-043373: exit status 2 (493.815524ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-043373 -n no-preload-043373
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-043373 -n no-preload-043373: exit status 2 (516.675979ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-043373 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p no-preload-043373 --alsologtostderr -v=1: (1.275144254s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-043373 -n no-preload-043373
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-043373 -n no-preload-043373
--- PASS: TestStartStop/group/no-preload/serial/Pause (4.64s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.76s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-640017 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-640017 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.758205504s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.76s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.72s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-640017 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-640017 --alsologtostderr -v=3: (1.722319072s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.72s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-640017 -n newest-cni-640017
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-640017 -n newest-cni-640017: exit status 7 (86.700868ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-640017 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (21.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-640017 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-640017 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (21.366313526s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-640017 -n newest-cni-640017
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (21.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (97.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-417426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0920 20:26:27.247121  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:26:36.820451  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/default-k8s-diff-port-268732/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:26:44.177851  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/addons-388835/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-417426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m37.257270704s)
--- PASS: TestNetworkPlugins/group/auto/Start (97.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-640017 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-640017 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-640017 -n newest-cni-640017
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-640017 -n newest-cni-640017: exit status 2 (370.344136ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-640017 -n newest-cni-640017
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-640017 -n newest-cni-640017: exit status 2 (399.567242ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-640017 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-640017 -n newest-cni-640017
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-640017 -n newest-cni-640017
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.66s)
E0920 20:32:31.970889  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/no-preload-043373/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (55.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-417426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-417426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (55.909403625s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qqzw7" [7680bdc5-bbe9-44ae-bb41-6fe7351ff3f9] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00422059s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-417426 "pgrep -a kubelet"
I0920 20:27:53.944207  739787 config.go:182] Loaded profile config "kindnet-417426": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-417426 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nmnqk" [527edc81-eb78-402e-ab53-83ba644cd368] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nmnqk" [527edc81-eb78-402e-ab53-83ba644cd368] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004097969s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-417426 "pgrep -a kubelet"
I0920 20:28:01.002749  739787 config.go:182] Loaded profile config "auto-417426": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-417426 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gl2v6" [9742129f-bd2d-4443-9c81-4f6640c72bd5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-gl2v6" [9742129f-bd2d-4443-9c81-4f6640c72bd5] Running
E0920 20:28:06.505259  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/old-k8s-version-060703/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004491163s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-417426 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-417426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-417426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-417426 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-417426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-417426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (72.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-417426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-417426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m12.860915694s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.86s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (54.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-417426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0920 20:28:52.960408  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/default-k8s-diff-port-268732/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:29:20.661793  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/default-k8s-diff-port-268732/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-417426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (54.89617188s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (54.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-417426 "pgrep -a kubelet"
I0920 20:29:30.763137  739787 config.go:182] Loaded profile config "custom-flannel-417426": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-417426 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-l5mzn" [2c7511f9-2a61-492a-9924-3517899b2f91] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-l5mzn" [2c7511f9-2a61-492a-9924-3517899b2f91] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.005034989s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-417426 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-417426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-417426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-gz8zn" [e40d58de-5afa-42fa-b636-99bc667c6b47] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005901459s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-417426 "pgrep -a kubelet"
I0920 20:29:47.000392  739787 config.go:182] Loaded profile config "calico-417426": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-417426 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jb67h" [41db392d-6069-4558-9456-be8520859f71] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-jb67h" [41db392d-6069-4558-9456-be8520859f71] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.005770122s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-417426 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-417426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-417426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (86.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-417426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-417426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m26.951496788s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (86.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (56.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-417426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0920 20:30:39.839606  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/functional-353629/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:31:10.031506  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/no-preload-043373/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:31:10.038035  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/no-preload-043373/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:31:10.049897  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/no-preload-043373/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:31:10.071899  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/no-preload-043373/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:31:10.113410  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/no-preload-043373/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:31:10.194981  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/no-preload-043373/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:31:10.356659  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/no-preload-043373/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:31:10.678428  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/no-preload-043373/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:31:11.319779  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/no-preload-043373/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:31:12.601495  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/no-preload-043373/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:31:15.163174  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/no-preload-043373/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:31:20.285212  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/no-preload-043373/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-417426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (56.869017258s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-lqghz" [4cb7174c-1ca6-43ff-ad0a-b7282a7d2e23] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004896068s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-417426 "pgrep -a kubelet"
I0920 20:31:26.813401  739787 config.go:182] Loaded profile config "flannel-417426": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-417426 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vhb8d" [5dc0055a-a940-4ce3-9303-e1b43feff49b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vhb8d" [5dc0055a-a940-4ce3-9303-e1b43feff49b] Running
E0920 20:31:30.527197  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/no-preload-043373/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.00389158s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-417426 "pgrep -a kubelet"
I0920 20:31:31.694824  739787 config.go:182] Loaded profile config "enable-default-cni-417426": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-417426 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ms5fv" [e78bb2ff-3721-4a78-aca5-fb7f45c03df3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ms5fv" [e78bb2ff-3721-4a78-aca5-fb7f45c03df3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004409676s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-417426 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-417426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-417426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-417426 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-417426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-417426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (42.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-417426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-417426 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (42.096556655s)
--- PASS: TestNetworkPlugins/group/bridge/Start (42.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-417426 "pgrep -a kubelet"
I0920 20:32:44.196865  739787 config.go:182] Loaded profile config "bridge-417426": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-417426 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nvnrm" [799cc7ac-2378-4a9a-ad7d-c19a9cb92520] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0920 20:32:47.644375  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/kindnet-417426/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:47.650827  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/kindnet-417426/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:47.662241  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/kindnet-417426/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:47.683641  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/kindnet-417426/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:47.725198  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/kindnet-417426/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:47.806921  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/kindnet-417426/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:47.968415  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/kindnet-417426/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:48.290210  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/kindnet-417426/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-nvnrm" [799cc7ac-2378-4a9a-ad7d-c19a9cb92520] Running
E0920 20:32:48.931579  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/kindnet-417426/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:50.212968  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/kindnet-417426/client.crt: no such file or directory" logger="UnhandledError"
E0920 20:32:52.774553  739787 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/kindnet-417426/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004359835s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-417426 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-417426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-417426 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    

Test skip (27/327)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.55s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-973006 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-973006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-973006
--- SKIP: TestDownloadOnlyKic (0.55s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-635447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-635447
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-417426 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-417426

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-417426

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-417426

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-417426

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-417426

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-417426

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-417426

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-417426

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-417426

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-417426

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-417426

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-417426" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-417426" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19678-734403/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 20:07:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-714935
contexts:
- context:
    cluster: pause-714935
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 20:07:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-714935
  name: pause-714935
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-714935
  user:
    client-certificate: /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/pause-714935/client.crt
    client-key: /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/pause-714935/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-417426

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-417426"

                                                
                                                
----------------------- debugLogs end: kubenet-417426 [took: 3.650370642s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-417426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-417426
--- SKIP: TestNetworkPlugins/group/kubenet (3.81s)
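The kubenet skip follows the same runtime-gate pattern as the CNI message above: kubenet ships no CNI configuration, so only the Docker runtime can exercise it. A hedged sketch, where containerRuntime is again an illustrative parameter rather than the harness's real accessor:

package integration

import (
	"strings"
	"testing"
)

// maybeSkipKubenet skips kubenet coverage on CNI-requiring runtimes such as containerd.
func maybeSkipKubenet(t *testing.T, containerRuntime string) {
	if !strings.EqualFold(containerRuntime, "docker") {
		t.Skipf("Skipping the test as %s container runtime requires CNI", containerRuntime)
	}
}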

                                                
                                    
TestNetworkPlugins/group/cilium (5.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-417426 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-417426

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-417426

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-417426

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-417426

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-417426

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-417426

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-417426

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-417426

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-417426

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-417426

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-417426

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-417426" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-417426

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-417426

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-417426

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-417426

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-417426" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-417426" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19678-734403/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 20:07:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-714935
contexts:
- context:
    cluster: pause-714935
    extensions:
    - extension:
        last-update: Fri, 20 Sep 2024 20:07:57 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-714935
  name: pause-714935
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-714935
  user:
    client-certificate: /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/pause-714935/client.crt
    client-key: /home/jenkins/minikube-integration/19678-734403/.minikube/profiles/pause-714935/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-417426

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-417426" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-417426"

                                                
                                                
----------------------- debugLogs end: cilium-417426 [took: 4.93995446s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-417426" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-417426
--- SKIP: TestNetworkPlugins/group/cilium (5.12s)

                                                
                                    